Q:
Debugging the ribbon
I've inherited a farm where previous "admins" decided to make undocumented changes to the file system to protect users under the guise of governance. This meant removing ribbon options like Workflow settings and anything related to Sharepoint Designer.
I was able to acquire a stock CMDUI.xml file from an untouched farm and replace that in our farm where modifications were made. IIS was reset, browser cache was cleared and the buttons do not reappear.
This leads me to believe there might be customizations elsewhere. What other files in the hive have an influence over the ribbon that I can check? Would ONET alterations cause this?
Note there is a custom master page, however this happens on sites without it too, as well as freshly created site collections. There are no user controls or custom javascript or CSS being introduced in the master page that affect the ribbon.
Edit: I've reviewed Andrew's article on MSDN and none of this is being employed in the farm, leading me to believe this is all file system driven.
A:
First check the source of the affected page in the browser. If it does not contain the buttons, then they are removed on the server side using HideCustomAction, a custom control added to the page or master page, CMDUI.xml, ...
If the button definition is in the page source but is not visible, then it's done by JS. Each button has its own ID, like Ribbon.Library.Settings.ManageWorkflows-Large. So as the next step I would search all JS files in the SP hive containing e.g. Ribbon.Library.Settings.ManageWorkflows.
| {
"pile_set_name": "StackExchange"
} |
Q:
Can bad cabling affect the overall network?
TLDR: Can a handful of problematic lines negatively impact the performance of the network as a whole? ... or would any problems stop once it reaches the switch?
Longer Version (background):
A school I support was recently modernized. They had professional cable installers run all new Cat 6 lines. Construction in general ran from April 2017 until roughly Dec 2017 - and the cabling was installed and certified in phases, with the last stage in October 2017.
In September (upon school resuming) we noticed they goofed by installing the patch panels in the back of the rack instead of the front. Due to the rack and room configuration, it was necessary for them to relocate the patch panels to the front of the rack.
Their installers came out and spent the day unpacking the cable bundles, and moving them to the front of the rack. Afterward they re-bundled the cables: from the back of the keystone jack it was secured to a cable support rod (which was attached to the 1U patch panel). The 1U patch panel's bundle was then bundled with its neighbors, fastened to the rack, which in turn became part of the 10-inch diameter bundle going into the ceiling (which is suspended by a cable hook before entering the cable trays)
We had a problem with one of our Unifi access points showing up as 100FDX instead of Gigabit. I did the usual troubleshooting. Despite having good continuity on all pairs, it was determined (through a process of elimination) that the problem had to be with the cabling / termination. The installers were contacted - they came out and tested - confirmed it was the line - repunched it .. and everything is working properly now.
When this problem first surfaced, I observed the keystone jacks from behind and saw that a number of them did not have the termination cap locked on - it had snapped loose during the cable relocation. After they re-punched the line, it was of course locked. I counted approximately 6-8 jacks that have their termination cap loose .. and another 10 that may have it loose (it's hard to tell visually). I don't want to lay a finger on their cabling until after the warranty is up in a year.
If those lines were certified after each phase - the certification report demonstrated that the cabling and termination was to specification at the time the certification was run. But since they did not re-certify after moving the patch panels .. and we have now found a problem with a line that was previously certified .. wouldn't that call into question the entire certification report? Shouldn't they have recertified?
When the school corresponded with the network installer, the school was told that there was "absolutely no need to test cables" .. since it was only a single failure, and there have been no other issues with the system since the relocation of the patch panels ...
I know from experience that "not having issues" does not mean that there are not, or will not be problems. You can have full continuity on all 4 pairs .. and still fail certification. You will still be able to communicate on the line .. if the ethernet frames fail checksum and are discarded .. you'll get an increase in dropped packets with TCP .. and more Tx / Rx retries ... which in turn can slow down the communication on that line .. I'm guessing considerably.
But would the problem only be localized to that line - or could it affect the network as a whole? If so - would it be negligible on the whole, or a multiplicative effect?
If it is likely to be a multiplicative and substantial effect - do you have any links / resources you could refer me to, so I may read up further and reference it to the school?
I don't know if it's needed to answer the question - but just in case .. we're using a 4 unit switch stack (functioning as one logical unit) .. Cisco SG500-52MP
Thanks in advance for your time and expertise ...
A:
A bad link will primarily just affect the communication between the two devices. However, there are some secondary effects that may come into play due to intermittent link loss or loss of important frames:
vital services through the link may become unreachable (DHCP, routing protocols, router discovery, ...)
spanning tree may play up with a flapping link, depending on how carefully your STP deployment was designed
You certainly don't want unreliable links in your network. Monitor each one for FCS errors, runts, or giants. Gigabit links are bidirectional on all pairs, so monitoring one direction should be enough. 10/100 Mbit links need to be checked in both directions.
When you've got cabling installed you need to make sure that it's up to spec. Getting certification for each link is essential, significant changes like moving the panels require recertification of the links. A simple check for continuity or shorts is a quick first diagnosis but passing that doesn't guarantee the cable to actually work at all.
| {
"pile_set_name": "StackExchange"
} |
Q:
Android - MalformedURLException on HTTP request
I'm fairly okay at programming so this is a weird one for me...
I'm attempting to pull a value from Mantis Bug Tracker to display on my Android app. To do this I'm accessing Mantis' SOAP interface using the kSOAP2 library.
The error i get is a MalformedURLException with these details
Protocol not found: /api/soap/mantisconnect.php
The URL is built from a user entered URI with a String attached onto the end:
String fullUrl = HOST + "/api/soap/mantisconnect.php/mc_version?wsdl";
trans.call(fullUrl, soapEnvelope);
HOST is passed in successfully; I have seen this through debugging variables, and fullUrl equates to:
http://192.168.1.98/mantis/api/soap/mantisconnect.php/mc_version?wsdl
This is copied right out of the debug view for the HOST variable in Eclipse.
The point at which the program errors out is at trans.call(); where trans is an HTTPTransportSE object provided in the kSOAP2 libraries. The URL that is defined in trans is (oddly?):
/api/soap/mantisconnect.php
So to me this problem appears to lie in a parse issue within the HttpTransportSE object, for which source can be found here extending transport.java (I can't put a link as it thinks I'm spamming, but you can find it in that SVN).
Normally by this point I've spotted the problem and never have to hit the submit button, but I'm having real issues with this =(. All help welcome & necessary!
A:
Have you tried it without the HOST variable and just hard code the URL?
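If hard-coding alone doesn't fix it, the following sketch shows what that would look like; note that in kSOAP2 the endpoint URL normally goes into the HttpTransportSE constructor, while the first argument of call() is the SOAP action, not the URL. The namespace and method name below are illustrative placeholders built from the values in the question, not verified settings for your Mantis installation.
import org.ksoap2.SoapEnvelope;
import org.ksoap2.serialization.SoapObject;
import org.ksoap2.serialization.SoapSerializationEnvelope;
import org.ksoap2.transport.HttpTransportSE;

public class MantisVersionCheck {
    // Placeholder values; adjust to your Mantis installation.
    private static final String NAMESPACE = "http://futureware.biz/mantisconnect";
    private static final String METHOD = "mc_version";
    private static final String URL = "http://192.168.1.98/mantis/api/soap/mantisconnect.php";

    public static String fetchVersion() throws Exception {
        SoapObject request = new SoapObject(NAMESPACE, METHOD);
        SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
        envelope.setOutputSoapObject(request);

        HttpTransportSE trans = new HttpTransportSE(URL); // hard-coded URL goes to the constructor
        trans.call(NAMESPACE + "#" + METHOD, envelope);   // first argument is the SOAP action
        return envelope.getResponse().toString();
    }
}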
| {
"pile_set_name": "StackExchange"
} |
Q:
interactive map with leafletR variable point size
I create an interactive map as follows:
library(leafletR)
data(quakes)
# store data in GeoJSON file (just a subset here)
q.dat <- toGeoJSON(data=quakes[1:99,], dest=tempdir(), name="quakes")
# make style based on quake magnitude
q.style <- styleGrad(prop="mag", breaks=seq(4, 6.5, by=0.5), style.val=rev(heat.colors(5)), leg="Richter Magnitude", fill.alpha=0.7, rad=8)
# create map
q.map <- leaflet(data=q.dat, dest=tempdir(), title="Fiji Earthquakes", base.map="osm", style=q.style, popup="mag")
# view map in browser
rstudio::viewer(q.map)
Now, I want to make the size of the circle dependent on another variable. Let's say the variable 'stations'. How can I do this? If it is not possible with this package, I am open to use another package ... as long as I can put a legend, the map is interactive, a pop-up appear when clicked on and the color can depend on the value of a continuous variable.
A:
I read through the documentation for the leafletR package, and it seems to me (and I could be wrong) that the current version doesn't support multiple styles for the same dataset. They give a few examples where they combine 2 styleSingles by listing them (e.g. style=list(sty.1, sty.2)), but that only works in conjunction with listing 2 different datasets (see P.8 in the document for more details). I tried various tricks, but none of them worked for me.
However, I came up with a hacky solution that you might want to try. After the html page is created using the leaflet() function, you can edit the Javascript code that handles the styling to make the radius property dynamic (this could also work for the other styling properties, such as fill, alpha, etc.).
What you need to know:
In the HTML document that leaflet creates, search for the definition of the style1(feature) function. You should find the following segment of code:
function style1(feature) {
return {"color": getValue(feature.properties.mag),
"fillOpacity": 0.7,
"radius": 8};
}
This function basically returns the style for each record in your dataset. As you can see, the function in its current form returns a static value for fillOpacity and radius. However, when it comes to color, it calls another function called getValue and passes it the mag (magnitude) property. If we take a look at the definition of the getValue function, we'll see that it simply defines the magnitude ranges for each color:
function getValue(x) {
return x >= 6.5 ? "#808080" :
x >= 6 ? "#FF0000" :
x >= 5.5 ? "#FF5500" :
x >= 5 ? "#FFAA00" :
x >= 4.5 ? "#FFFF00" :
x >= 4 ? "#FFFF80" :
"#808080";
}
The function definition is really simple. If x (the magnitude in this case) is greater or equal to 6.5, then the color of that data point will be "#808080". If it's between 6 and 6.5, then the color will be #FF0000". And so on and so forth.
What you can do:
Now that we see how the Javascript code handles how the colors are assigned to each data point, we can do something similar for all the other styling properties with very minimal effort. The following code segment, for instance, shows how you can make the radius dynamic based on the count of stations in the area:
/* The getValue function controls the color of the data points */
function getValue(x) {
return x >= 6.5 ? "#808080" :
x >= 6 ? "#FF0000" :
x >= 5.5 ? "#FF5500" :
x >= 5 ? "#FFAA00" :
x >= 4.5 ? "#FFFF00" :
x >= 4 ? "#FFFF80" :
"#808080";
}
/* The getRadValue function controls the radius of the data points */
function getRadValue(x) {
return x >= 100 ? 24 :
x >= 80 ? 20 :
x >= 60 ? 16 :
x >= 40 ? 12 :
8;
}
/* The updated definition of the style1 function */
function style1(feature) {
return {"color": getValue(feature.properties.mag),
"fillOpacity": 0.7,
"radius": getRadValue(feature.properties.stations)
};
}
So, with the new definition of style1(feature), now we can control both the color as well as the radius of the data points. The result of the code modification looks like this:
The good thing about this approach is that it gives you more fine-grained control over the styling properties and the range of values that they can have. The major drawback is that if you want to add a legend for those properties, you'll have to do that manually. The logic for adding/editing the legend should be at the very bottom of the HTML document, and if you know Javascript/HTML/CSS, editing that code segment shouldn't be too difficult.
Update:
To add a legend for the new dynamic variable (in our case, the radius), you need to edit the .onAdd handler that's attached to the legend object. As I said before, the definition for this handler is usually at the bottom of the html page, and if we run the bit of code that you provided in your question, then the handler should look like this:
legend.onAdd = function (map) {
var div = L.DomUtil.create('div', 'legend');
var labels = [];
var grades = [4, 4.5, 5, 5.5, 6, 6.5];
div.innerHTML += 'Richter Magnitude<br>';
for (var i = 0; i < grades.length - 1; i++) {
div.innerHTML += '<i style="background:' + getValue(grades[i]) + '"></i> ' + grades[i] + '–' + grades[i + 1] + '<br>';
}
return div;
};
The above code simply loops through the range of values for the magnitude, and creates a box (with the appropriate color, referencing the getValue function that we looked at before) and a label. If you want to create something similar for the stations variable, let's say, we can use the same logic above. Though in this case instead of varying the color, we'll be varying the size of the circle. The following segment of code shows how to achieve that:
legend.onAdd = function (map) {
var div = L.DomUtil.create('div', 'legend');
var labels = [];
var grades = [4, 4.5, 5, 5.5, 6, 6.5];
div.innerHTML += 'Richter Magnitude<br>';
for (var i = 0; i < grades.length - 1; i++) {
div.innerHTML += '<i style="background:' + getValue(grades[i]) + '"></i> ' + grades[i] + '–' + grades[i + 1] + '<br>';
}
// Adding the range of possible of values that the variable might take
// This should be in sync with the range of values you considered in
// the getRadValue function.
var rad_grades = [40, 60, 80, 100];
// The title for this section of the legend
div.innerHTML += 'Number of stations<br>'
for (var i = 0; i < rad_grades.length - 1; i++) {
div.innerHTML += '<table style="border: none;"><tr><td class="circle" style="width: ' +
(getRadValue(rad_grades[rad_grades.length - 2]) * 2 + 6) + 'px;"><svg style="width: ' +
(getRadValue(rad_grades[i]) * 2 + 6) + 'px; height: ' + (getRadValue(rad_grades[i]) * 2 + 6) +
'px;" xmlns="http://www.w3.org/2000/svg" version="1.1"><circle cx="' + (getRadValue(rad_grades[i]) + 3) + '" cy="' +
(getRadValue(rad_grades[i]) + 3) + '" r="' + getRadValue(rad_grades[i]) + '" /></svg></td><td class="value">' +
rad_grades[i] + '–' + rad_grades[i + 1] + '</td></tr></table>';
}
return div;
};
As you can see, the type of styling property we're controlling will determine how we're specifying it in the legend. If you want to add a legend for the alpha property, for instance, then you might want to try some other approach other than using circles and controlling their width and height. The end result of the code modifications above looks like this:
Also, if you want to include the number of stations in the popup, then you'll have to edit the onEachFeature function. It's going to be the same approach we took with all the other modifications, and it's a really simple change.
The onEachFeature function looks like this in the original HTML:
function onEachFeature(feature, layer) {
if (feature.properties && feature.properties.mag) {
layer.bindPopup("mag: " + feature.properties.mag);
}
}
If you want to include the number of stations in the popup too, then you need to include it in the argument to the bindPopup method, as follows:
function onEachFeature(feature, layer) {
if (feature.properties && feature.properties.mag && feature.properties.stations) {
layer.bindPopup("mag: " + feature.properties.mag + "<br> # Stations: " + feature.properties.stations);
}
}
The end result of this change is the following:
Hope this helps.
| {
"pile_set_name": "StackExchange"
} |
Q:
Java - What happens when child thread dies from NPE , does parent thread get killed
I have a main thread in Android and it has spawned a child thread (an IntentService, but it doesn't really matter; it's just a thread). Let's say the child thread gets an uncaught null pointer exception. My question is: does the main thread die, or only the child thread? Can the process continue?
A:
If a thread exits due to an error, the JVM will continue execution as long as there exist other non-daemon threads. Daemon threads are like normal threads, but they do not keep the JVM alive if only daemon threads are still alive. The JVM usually keeps a lot of daemon threads around, including the GC and finalizer threads, to do maintenance and process signals from the OS.
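A self-contained sketch (not the original Android code) that demonstrates this: the child thread dies from an uncaught NullPointerException while the main thread carries on.
public class ChildCrashDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread child = new Thread(() -> {
            String s = null;
            // Throws an uncaught NullPointerException, which kills only this thread.
            System.out.println(s.length());
        });
        child.start();
        child.join();

        // The parent (main) thread is unaffected and the process keeps running.
        System.out.println("Main thread is still alive after the child died.");
    }
}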
| {
"pile_set_name": "StackExchange"
} |
Q:
this.innerHTML doesn't work for links?
I want to get the following value: I want this text.
function get(v){
alert(v);
}
<a href="javascript:get(this.innerHTML);">I want this text</a>
I tried many combinations of this, this.innerHTML, etc.. They all return undefined...
Any tips?
A:
Try using the onclick handler instead:
function get(v){
alert(v);
}
<a onclick="get(this.innerHTML); return false" href="#">I want this text</a>
| {
"pile_set_name": "StackExchange"
} |
Q:
Slow writing on SQLite database after ALTER
I have a relatively big SQLite database (~900 MB) with 3 tables inside. I recently removed a field from the main table, through the software SQLlitebrowser. I used the answer found here: Delete column from SQLite table
And I basically ran this:
BEGIN TRANSACTION;
CREATE TEMPORARY TABLE t1_backup(id, percentage_match, doi, title, date, journal, authors, abstract, graphical_abstract, liked, url, new, topic_simple, author_simple);
INSERT INTO t1_backup SELECT id, percentage_match, doi, title, date, journal, authors, abstract, graphical_abstract, liked, url, new, topic_simple, author_simple FROM papers;
DROP TABLE papers;
CREATE TABLE papers(id, percentage_match, doi, title, date, journal, authors, abstract, graphical_abstract, liked, url, new, topic_simple, author_simple);
INSERT INTO papers SELECT id, percentage_match, doi, title, date, journal, authors, abstract, graphical_abstract, liked, url, new, topic_simple, author_simple FROM t1_backup;
DROP TABLE t1_backup;
COMMIT;
I also tried to run that query through a python script.
After the alteration of the database, writing into the main table is extremely slow.
When I try to write on the original database (I made a backup before removing the field), I can write into the main table more or less 10000 times faster.
Do you have an idea why this is happening? I tried to vacuum the newly altered database, but the loss of performance is still there.
EDIT:
@neuhaus: I ran .schema (first time I've heard about it, forgive me if I misused it) on the new table:
sqlite> .schema papers
CREATE TABLE papers(id, percentage_match, doi, title, date, journal, authors, abstract, graphical_abstract, liked, url, new, topic_simple, author_simple);
And on the old table:
sqlite> .schema papers
CREATE TABLE IF NOT EXISTS "papers" (
`id` INTEGER PRIMARY KEY AUTOINCREMENT,
`percentage_match` REAL,
`doi` TEXT,
`title` TEXT,
`date` TEXT,
`journal` TEXT,
`authors` TEXT,
`abstract` TEXT,
`graphical_abstract` TEXT,
`liked` INTEGER,
`url` TEXT,
`new` INTEGER,
`topic_simple` TEXT,
`author_simple` TEXT
, url_image TEXT);
It's obvious there is a difference. It seems the types of the fields are not present in the new table. What do you think, and how do I correct that?
A:
Compare the schema of the old table and the new table using the sqlite3 utility and its .schema command.
The new table is likely to be missing an index.
Use the CREATE TABLE statement given to you by the output of the .schema command (without the column you want to drop) instead of the one you are using now.
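In this case that means recreating the table with the typed definition shown by the old .schema output, minus the url_image column, which appears to be the one being dropped. A sketch based purely on the schemas quoted in the question:
CREATE TABLE papers (
    `id` INTEGER PRIMARY KEY AUTOINCREMENT,
    `percentage_match` REAL,
    `doi` TEXT,
    `title` TEXT,
    `date` TEXT,
    `journal` TEXT,
    `authors` TEXT,
    `abstract` TEXT,
    `graphical_abstract` TEXT,
    `liked` INTEGER,
    `url` TEXT,
    `new` INTEGER,
    `topic_simple` TEXT,
    `author_simple` TEXT
);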
| {
"pile_set_name": "StackExchange"
} |
Q:
What do the truck's loudspeakers say in HBO's Chernobyl?
Starting at about 1:22 in the trailer, an announcement starts broadcasting on repeat. It is presumably coming from the truck we see a couple of seconds later. What is this announcement?
A:
This seems to be the beginning of the evacuation announcement, which you can also listen to, repeating the first two words, which sound like a repetition of внимание (English: "attention") - there is an audio clip, too.
So, in the trailer, it's just
Attention, Attention
The original announcement (April 27, 1986) then went on
For the attention of the residents of Pripyat! The City Council informs you that due to the accident at Chernobyl Power Station in the city of Pripyat the radioactive conditions in the vicinity are deteriorating. [...] Nevertheless, with the view to keep people as safe and healthy as possible, the children being top priority, we need to temporarily evacuate the citizens in the nearest towns of Kiev region. [...]
This can now be confirmed, since John August uploaded the scripts (Craig Mazin, 2018, HBO) to his site. The truck appears in episode 2 Please Remain Calm, in scenes 252 and 253 (pp. 45-49):
Two times
Attention, Attention
Then the announcement
FOR THE ATTENTION OF THE RESIDENTS OF PRIPYAT. [...]
as already quoted above.
| {
"pile_set_name": "StackExchange"
} |
Q:
Pointer to pointer parameter in Swift function
How do you do pointers to pointers in Swift? In Objective-C I had a function which I would call recursively so that I could keep track of the number of recursions, but I'm stumped as to how to achieve this in Swift.
NSNumber *recursionCount = [NSNumber numberWithInteger:-1];
[self doRecursion:&recursionCount];
- (void)doRecursion:(NSNumber **)recursionCount {
// I'm sure this is a terrible way to increment, but I have limited Objective-C experience
// and this worked, so I moved on after failing to find a decent var++ equivalent :-(
int addThis = (int)i + 1;
*recursionCount = [NSNumber numberWithInt:[*recursionCount intValue] + addThis];
[self doRecursion:recursionCount];
}
In the process of cutting this sample down for this post, I've ended up creating a never-ending loop, but you get the idea on how I'm remembering the value with each recursion.
Does anybody know how to do this in Swift? Thanks.
A:
Usage of pointers in Swift is highly discouraged.
To change a variable passed as argument to a function, you have to pass it by reference (similar to passing its pointer) using the inout modifier. I've changed the logic of your function to stop after 10 iterations:
func doRecursion(inout recursionCount: Int) -> Int {
if (recursionCount < 10) {
++recursionCount
doRecursion(&recursionCount)
}
return recursionCount
}
Then you can call using:
var counter: Int = -1
let x = doRecursion(&counter) // Prints 10 in playground
Suggested reading: In-Out Parameters
| {
"pile_set_name": "StackExchange"
} |
Q:
print() function's end argument
If I have:
for x in range(10):
time.sleep(1)
print('x')
it will print ‘x’ every second, 10 times, each on its own line.
However, if I change it to be print('x', end='') to make the ‘x’ print all on the same line, the script appears to do nothing for 10 seconds and then dumps all 10 x’s at once.
Why?
A:
Line buffering. It's waiting for a new line character before flushing stdout.
Try this instead:
print('x', end='', flush=True)
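Putting it together with the loop from the question (the only change is the flush=True argument):
import time

for x in range(10):
    time.sleep(1)
    # flush=True forces stdout to be written immediately, even though no newline is printed
    print('x', end='', flush=True)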
| {
"pile_set_name": "StackExchange"
} |
Q:
What does "take a lap out of each day" mean in this context?
What does "lap out" mean?
I'd highly recommend for you to get out there and film your riding. Everyone has access to a video camera these days whether it's a Gopro or just the camera on your smart phone.Go up with a buddy, take a lap out of each day and file each other so you can go home and see what you look like.This is going to do wonders for your riding. You will pick up so much stuff that you did not realize was happening and you can compare your riding to mine or other pros that you want to ride like.
PS: https://tw.voicetube.com/videos/31641#video-container <-- I've been studying from this web site. The sentence in question is between 0:56 and 0:68.
Please give me more example sentences, thanks!
A:
I'm expanding the comment I made above. Note that this source is not particularly idiomatic or even correct English, so it's not necessarily a good guide for learning colloquial expressions.
That said, you're being confused by thinking that "lap out" is an expression: it is not. The expression you need to understand is take [a time period] out of each day. For example, I might write:
Take ten minutes out of each day to practice your vocabulary.
This just says that you should stop doing everything else for ten minutes every day and devote that ten minutes to doing a particular thing (practicing your vocabulary).
In your example, the time period you're being asked to reserve is one lap, which means "a complete circuit of a course in racing." If you're out snowboarding with a friend, you are both going to do multiple laps of the snowboarding course throughout the day. Note that this is not really colloquial - usually you do runs in snowboarding, not laps, because a lap is almost always a circular route.
This video is advising you to take one lap out of each day. That is, stop snowboarding for one lap (or run) and instead, video your friend doing his lap (or run). I am sure that "file each other" is a mistake and it should be "film each other". If you both do this for each other, you will each have a video of your performance to review for style and technique.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to test a rails PORO called from controller
I have extracted part of my Foos controller into a new rails model to perform the action:
foos_controller.rb
class FoosController < ApplicationController
respond_to :js
def create
@foo = current_user.do_something(@bar)
actioned_bar = ActionedBar.new(@bar)
actioned_bar.create
respond_with @bar
end
actioned_bar.rb
class ActionedBar
def initialize(bar)
@bar = bar
end
def create
if @bar.check?
# do something
end
end
end
I got it working first but now I'm trying to back-fill the rspec controller tests.
I'll be testing the various model methods and will be doing a feature test to make sure it's ok from that point of view but I would like to add a test to make sure the new actioned_bar model is called from the foos controller with @bar.
I know in rspec you can test that something receives something with some arguments but I'm struggling to get this to work.
it "calls ActionedBar.new(bar)" do
bar = create(:bar)
expect(ActionedBar).to receive(:new)
xhr :post, :create, bar_id: bar.id
end
This doesn't work though, the console reports:
NoMethodError:
undefined method `create' for nil:NilClass
which is strange because it only does this when I use expect(ActionedBar).to receive(:new), the rest of the controller tests work fine.
If I try to do:
it "calls ActionedBar.new(bar)" do
bar = create(:bar)
actioned_bar = ActionedBar.new(bar)
expect(actioned_bar).to receive(:create).with(no_args)
xhr :post, :create, bar_id: bar.id
end
the console says:
(#<ActionedBar:0xc8f9f74>).create(no args)
expected: 1 time with no arguments
received: 0 times with no arguments
If I do a puts in the controller whilst running the test, for some reason this test causes the actioned_bar in the controller to be output as nil, but it is fine for all the other controller tests.
Is there any way I can test that ActionedBar is being called in this controller spec?
A:
You can use expect_any_instance_of(ActionedBar).to receive(:create), because the instance in the spec and the instance in the controller are different objects.
If you want to use the original object, you can use expect(ActionedBar).to receive(:new).and_call_original (without that, #new will just return nil and you'll get the NoMethodError).
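As a rough sketch of how the spec could look with those two suggestions combined (the names and the xhr call are taken from the question; the exact setup depends on your factories and routes):
it "calls ActionedBar.new(bar) and #create" do
  bar = create(:bar)

  # and_call_original keeps ActionedBar.new returning a real object, so the
  # controller's subsequent actioned_bar.create call does not hit nil ...
  expect(ActionedBar).to receive(:new).and_call_original

  # ... and #create is expected on whatever instance the controller builds,
  # since that instance is not the one constructed in the spec.
  expect_any_instance_of(ActionedBar).to receive(:create)

  xhr :post, :create, bar_id: bar.id
end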
| {
"pile_set_name": "StackExchange"
} |
Q:
Using sed to write xml tag in Rakefile
I have been trying to pre-process an XML-like file using a Rakefile; what I am trying to do is add a group of XML tags.
The following sed is the short version of what I have done
sed -ig '/TARGET_STRING/{N;G;s/$/<key>KEY_NAME<\/key>/g;}' whateverfile.xml
and this piece of code worked beautifully and succeeds when run from the terminal.
And I put them into the Rakefile I made, like this:
desc 'setup pods archs'
task :setup_podsarchs => :setup_submodules do
puts 'Altering xml...'.cyan
`sed -ig '/TARGET_STRING/{N;G;s/$/<key>KEY_NAME<\/key>/g;}' whateverfile.xml`
end
After executing rake, it prompts an error and terminates the task:
sed: 1: "/TARGET_STRING/{N;G;s/$/ ...": bad flag in substitute command: 'k'
I have been searching around for a long time and cannot find any information about escaping the < and > characters in Ruby.
My platform
OS: Mac OS X 10.9
Ruby: 2.0.0p247
rake: 0.9.6
sed: 7
Update
Hi, thank you guys for the extremely fast reply.
and @the Tin Man,
for the comment,
What I am trying to do is pre-process the Xcode project file (.pbxproj), which is structured as XML.
For simplicity, I just show an example of the XML structure here:
<dict>
<key>Key_ONE</key>
<string>1</string>
</dict>
What I am trying to do is find the KEY_ONE and add another key after that:
<dict>
<key>Key_ONE</key>
<string>1</string>
<key>Key_TWO</key>
<string>2</string>
</dict>
A:
Using regular expressions for anything beyond the most trivial and controlled parsing leads to madness. Use Nokogiri, an excellent Ruby XML/HTML parser. For instance:
require 'nokogiri'
xml = <<EOT
<xml>
<foo>foo</foo>
<bar>bar</bar>
</xml>
EOT
doc = Nokogiri::XML(xml)
doc.at('foo').content = 'bar'
doc.at('bar')['class'] = 'cyan'
puts doc.to_xml
Which outputs:
<?xml version="1.0"?>
<xml>
<foo>bar</foo>
<bar class="cyan">bar</bar>
</xml>
Notice the content inside the <foo> tag changed, along with <bar> gaining an attribute.
What's important about using a parser is that the content can change, tag parameters can change, their order can move around inside the tag, tags can be split across multiple lines, and a parser will not care, whereas a regular expression will spout flames and stop working.
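Applied to the structure from the question, a sketch of the same idea (the file name and key values are just the ones shown above) could look like:
require 'nokogiri'

doc = Nokogiri::XML(File.read("whateverfile.xml"))

# Find the <key>Key_ONE</key> node and the <string> element right after it
key_one = doc.at_xpath("//key[text()='Key_ONE']")
value_one = key_one.next_element

# Insert the new key/value pair after the existing <string>1</string>
value_one.add_next_sibling("<key>Key_TWO</key><string>2</string>")

File.write("whateverfile.xml", doc.to_xml)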
| {
"pile_set_name": "StackExchange"
} |
Q:
R k-means and hierarchical clustering take forever to finish
I have a data set (after normalising and preprocessing) that contains a data frame with 5 columns and 133763 rows.
I am trying to apply the k-means algorithm and hierarchical clustering. However, my problem is that RStudio keeps trying to do the calculation and then throws an out-of-memory exception, even though I am using a MacBook Pro with an i7 and 16 GB of RAM.
My code for hierarchical clustering is:
dist.cards<-dist(cardsNorm)
as i said that takes forever running. however, if i did this
dist.cards<-dist(cardsNorm[1:10])
it works fine; that is because I just use 10 rows.
For the k-means, this is my code:
cardsKMS<-kmeans(cardsNorm,centers=3,iter.max = 100,nstart = 25)
it works fine, but when I try to measure the model using this code
a <- silhouette(cardsKMS$cluster,dist(cardsNorm))
it takes forever and never finishes calculating
help please
A:
Creating a distance matrix between n = 133763 observations requires (n^2-n)/2 pairwise comparisons. Given that a scalar numeric requires 12 bytes of RAM, the entire matrix requires about 100 GB. So unfortunately you don't have enough.
Algorithms based on distance matrices scale very poorly with increased data set size (since they are inherently quadratic in memory and CPU) so I am afraid you need to try some other clustering algorithm.
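A quick back-of-the-envelope check in R, using the 12-bytes-per-value figure from above, shows the scale of the problem:
n <- 133763
pairs <- n * (n - 1) / 2   # number of pairwise distances, roughly 8.9e9
pairs * 12 / 2^30          # roughly 100 GiB needed for the full distance matrix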
| {
"pile_set_name": "StackExchange"
} |
Q:
Use Tesseract 4 - Docker Container from uwsgi-nginx-flask-docker
I had my Python project running locally, and it works. I use Tesseract from Python with the subprocess package.
Then I deployed my project, and since I use Flask, I installed tiangolo-uwsgi-flask-nginx-docker, but Tesseract isn't installed there. That's why my project doesn't work anymore: it cannot find tesseract. And it doesn't recognize the tesseract that is installed on my AWS instance, because tesseract isn't installed in the docker container.
That's why I would like to use also tesseract 4 Docker which has an installation of Tesseract.
I have both Dockers:
c82b61361992 tesseractshadow/tesseract4re:latest "/bin/bash" 6 seconds ago Up 5 seconds t4re
e122633ef81c my_project:latest "/entrypoint.sh /sta 35 minutes ago Up 35 minutes 0.0.0.0:80->80/tcp, 443/tcp modest_perlman
But I don't know how to tell my_project that it has to take Tesseract from the Tesseract container.
I read this post about connecting two Docker containers, but I get even more lost. :)
I saw that the Tesseract Docker should work this way:
#!/bin/bash
docker ps -f name=t4re
TASK_TMP_DIR=TASK_$$_$(date +"%N")
echo "====== TASK $TASK_TMP_DIR started ======"
docker exec -it t4re mkdir \-p ./$TASK_TMP_DIR/
docker cp ./ocr-files/phototest.tif t4re:/home/work/$TASK_TMP_DIR/
docker exec -it t4re /bin/bash -c "mkdir -p ./$TASK_TMP_DIR/out/; cd ./$TASK_TMP_DIR/out/; tesseract ../phototest.tif phototest -l eng --psm 1 --oem 2 txt pdf hocr"
mkdir -p ./ocr-files/output/$TASK_TMP_DIR/
docker cp t4re:/home/work/$TASK_TMP_DIR/out/ ./ocr-files/output/$TASK_TMP_DIR/
docker exec -it t4re rm \-r ./$TASK_TMP_DIR/
docker exec -it t4re ls
echo "====== Result files was copied to ./ocr-files/output/$TASK_TMP_DIR/ ======"
But I've no clue how to implement it in my Python script and from the other container.
My python-tesseract script looks quite similar to pytesseract.py; I just changed a few lines and deleted some stuff I don't need.
Maybe someone knows how to do this, or could propose another, better way to use Tesseract with the tiangolo Docker image.
A:
EDIT
(See the edit below)
I found the answer. Since it would work for any two Docker containers, I'm going to write a general solution which one can always use.
I have both docker images and containers in the same instance:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
14524d364cff (image) "java -jar ..." 40 hours ago Up 40 hours 0.0.0.0:5000->5000/tcp api-1
3392994ae3ac (image) "java -jar ..." 40 hours ago Up 40 hours 0.0.0.0:5002->5002/tcp api-2
Until here it's easy.
Then, I wrote a docker-compose.yml
version: '2'
services:
api-1:
image: _name-of-image_
container_name: api-1
ports:
- "5000:5000"
depends_on:
- api-2
api-2:
image: _name-of-image_
container_name: api-2
ports:
- "5002:5002"
Then, in the docker file of api-1, for example.
...
ENV API-2HOST api-2
...
and that's it.
In my particular case, I have an api-1.conf with:
accounts = {
http = {
host = "localhost"
host = ${?API-2HOST}
port = 5002
poolBufferSize = 100
routes = {
authentication = "/authentication"
login = "/login/"
logout = "/logout"
refreshTokens = "/refreshTokens"
}
}
}
and then I can easily make a request there, and that is how the two Docker containers communicate.
Hope it can help someone.
EDIT
Since it can be complicated, I created a git project with just a dockerfile where you can use flask, nginx, uwsgi and tesseract. So there's no need to use both containers.
docker-flask-nginx-uwsgi-tesseract
| {
"pile_set_name": "StackExchange"
} |
Q:
How to change the titlebar's bgimage scale
I have a certain picture I want to turn into my titlebar background, but I don't know how to make it stretch so it fits the entire titlebar. Any advice?
A:
Here is a patch to the default config which makes titlebars have a background from a file. The file is scaled to the exact size of the titlebar.
diff --git a/awesomerc.lua b/awesomerc.lua
index fa584b8a8..3e3a54c0d 100644
--- a/awesomerc.lua
+++ b/awesomerc.lua
@@ -542,6 +542,14 @@ client.connect_signal("manage", function (c)
end
end)
+local tb_bg_image = gears.surface("/tmp/variant_outside.png")
+local bg_width, bg_height = gears.surface.get_size(tb_bg_image)
+local function bg_image_function(_, cr, width, height)
+ cr:scale(width / bg_width, height / bg_height)
+ cr:set_source_surface(tb_bg_image)
+ cr:paint()
+end
+
-- @DOC_TITLEBARS@
-- Add a titlebar if titlebars_enabled is set to true in the rules.
client.connect_signal("request::titlebars", function(c)
@@ -557,7 +565,8 @@ client.connect_signal("request::titlebars", function(c)
end)
)
- awful.titlebar(c) : setup {
+ local args = { bgimage_normal = bg_image_function, bgimage_focus = bg_image_function }
+ awful.titlebar(c, args) : setup {
{ -- Left
awful.titlebar.widget.iconwidget(c),
buttons = buttons,
| {
"pile_set_name": "StackExchange"
} |
Q:
Javascript constructor - effects of returning JSON object
Are there any performance or functional differences between having a JavaScript constructor return a JavaScript object literal, versus simply setting properties with this.XYZ? For example:
function PersonA(fname, lname) {
this.fname = fname;
this.lname = lname;
}
function PersonB(fname, lname) {
return {
"fname": fname,
"lname": lname
};
}
Both seem to behave appropriately:
PersonA.prototype.fullName = function() { return this.fname + " " + this.lname; };
PersonB.prototype.fullName = function() { return this.fname + " " + this.lname; };
var pA = new PersonA("Bob", "Smith");
var pB = new PersonB("James", "Smith");
alert(pA.fullName());
alert(pB.fullName());
Is one preferable for any reason, or is it a matter of taste? If taste, is one more standard?
A:
They're not entirely identical.
If you return the object being created from the constructor...
it will inherit from the prototype of the constructor
it will have instanceof available as a means of testing which constructor created it
The reason the fullName() method seems to work for pB is that you're using the PersonA constructor for both.
var pA = new PersonA("Bob", "Smith"); // uses PersonA constructor
var pB = new PersonA("James", "Smith"); // uses PersonA constructor???
FYI, the proper term is "JavaScript object literal", not "JSON object literal".
EDIT: You've updated the code in the question to use the PersonB constructor. Run it again, and you'll find an Error in the console.
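A small sketch of the difference, assuming the two constructors from the question:
var a = new PersonA("Bob", "Smith");
var b = new PersonB("James", "Smith");

a instanceof PersonA; // true  - `a` is the object created by `new`
b instanceof PersonB; // false - `new` discarded its object and returned the literal instead

typeof a.fullName;    // "function"  - inherited from PersonA.prototype
typeof b.fullName;    // "undefined" - the literal does not inherit from PersonB.prototype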
| {
"pile_set_name": "StackExchange"
} |
Q:
How to avoid that anytime() "updates by reference"?
I want to convert a numeric variable to POSIXct using anytime. My issue is that anytime(<numeric>) converts the input variable as well - I want to keep it.
Simple example:
library(anytime)
t_num <- 1529734500
anytime(t_num)
# [1] "2018-06-23 08:15:00 CEST"
t_num
# [1] "2018-06-23 08:15:00 CEST"
This differs from the 'non-update by reference' behaviour of as.POSIXct in base R:
t_num <- 1529734500
as.POSIXct(t_num, origin = "1970-01-01")
# [1] "2018-06-23 08:15:00 CEST"
t_num
# 1529734500
Similarly, anydate(<numeric>) also updates by reference:
d_num <- 17707
anydate(d_num)
# [1] "2018-06-25"
d_num
# [1] "2018-06-25"
I can't find an explicit description of this behaviour in ?anytime. I could use as.POSIXct as above, but does anyone know how to handle this within anytime?
A:
anytime author here: this is standard R and Rcpp and passing-by-SEXP behaviour: you cannot protect a SEXP being passed from being changed.
The view that anytime takes is that you are asking for an input to be converted to a POSIXct as that is what anytime does: from char, from int, from factor, from anything. As a POSIXct really is a numeric value (plus a S3 class attribute) this is what you are getting.
If you do not want this (counter to the design of anytime) you can do what @Moody_Mudskipper and @PKumar showed: used a temporary expression (or variable).
(I also think the data.table example is a little unfair as data.table -- just like Rcpp -- is very explicit about taking references where it can. So of course it refers back to the original variable. There are idioms for deep copy if you need them.)
Lastly, an obvious trick is to use format if you just want different display:
R> d <- data.frame(t_num=1529734500)
R> d[1, "posixct"] <- format(anytime::anytime(d[1, "t_num"]))
R> d
t_num posixct
1 1529734500 2018-06-23 01:15:00
R>
That would work the same way in data.table, of course, as the string representation is a type change. Ditto for IDate / ITime.
Edit: And the development version in the Github repo has had functionality to preserve the incoming argument since June 2017. So the next CRAN version, whenever I will push it, will have it too.
A:
You could hack it like this:
library(anytime)
t_num <- 1529734500
anytime(t_num+0)
# POSIXct[1:1], format: "2018-06-23 08:15:00"
t_num
# [1] 1529734500
Note that an integer input will be treated differently:
t_int <- 1529734500L
anytime(t_int)
# POSIXct[1:1], format: "2018-06-23 08:15:00"
t_int
# [1] 1529734500
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get the maximum-valued xref 'rid', excluding a particular section
Please suggest how to get the maximum 'rid' value from all xrefs except those in the 'Online' sections. After identifying the maximum-valued 'rid', I then need to insert an attribute into those references whose values are higher than that maximum. Please see the required result text.
XML:
<article>
<body>
<sec><title>Sections</title>
<p>The test <xref rid="b1">1</xref>, <xref rid="b2">2</xref>, <xref rid="b3 b4 b5">3-5</xref></p></sec>
<sec><title>Online</title><!--This section's xrefs no need to consider-->
<p>The test <xref rid="b6">6</xref></p>
<sec><title>Other</title>
<p><xref rid="b1">1</xref>, <xref rid="b7 b8">7-8</xref></p>
</sec>
</sec><!--This section's xrefs no need to consider-->
<sec>
<p>Final test test</p>
<sec><title>Third title</title><p>Last text</p></sec>
</sec>
</body>
<bm>
<ref id="b1">The ref01</ref>
<ref id="b2">The ref02</ref>
<ref id="b3">The ref03</ref>
<ref id="b4">The ref04</ref>
<ref id="b5">The ref05</ref>
<ref id="b6">The ref06</ref>
<ref id="b7">The ref07</ref>
<ref id="b8">The ref08</ref>
</bm>
</article>
XSLT:
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
<xsl:variable name="var1"><!--Variable to get the all 'rid's except sec/title contains 'Online' -->
<xsl:for-each select="//xref[not(. is ancestor::sec[title[contains(., 'Online')]]/descendant-or-self)]/@rid">
<!--xsl:for-each select="//xref/@rid[not(contains(ancestor::sec/title, 'Online'))]"--><!--for this xpath, error is : "XPTY0004: A sequence of more than one item is not allowed as the first argument" -->
<!--xsl:for-each select="//xref/@rid[not(contains(ancestor::sec[1]/title, 'Online')) and not(contains(ancestor::sec[2]/title, 'Online'))]"--><!--for this xpath we are getting the required result, but there may be several nesting of 'sec's -->
<xsl:choose>
<xsl:when test="contains(., ' ')">
<xsl:for-each select="tokenize(., ' ')">
<a><xsl:value-of select="."/></a>
</xsl:for-each>
</xsl:when>
<xsl:otherwise><a><xsl:value-of select="."/></a></xsl:otherwise>
</xsl:choose>
</xsl:for-each>
</xsl:variable>
<xsl:variable name="varMax1">
<xsl:for-each select="$var1/a">
<xsl:sort select="substring-after(., 'b')" order="descending" data-type="number"/>
<a><xsl:value-of select="."/></a>
</xsl:for-each>
</xsl:variable>
<xsl:variable name="varMax"><!--Variable to get max valued RID -->
<xsl:value-of select="substring-after($varMax1/a[1], 'b')"/>
</xsl:variable>
<xsl:template match="@*|node()">
<xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
</xsl:template>
<xsl:template match="ref">
<xsl:variable name="varID"><xsl:value-of select="substring-after(@id, 'b')"/></xsl:variable>
<xsl:choose>
<xsl:when test="number($varMax) lt number($varID)">
<xsl:copy>
<xsl:apply-templates select="@*"/>
<xsl:attribute name="MoveRef">yes</xsl:attribute>
<xsl:apply-templates select="node()"/>
</xsl:copy>
</xsl:when>
<xsl:otherwise>
<xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
Required result:
<article>
<body>
<sec><title>Sections</title>
<p>The test <xref rid="b1">1</xref>, <xref rid="b2">2</xref>, <xref rid="b3 b4 b5">3-5</xref></p></sec>
<sec><title>Online</title><!--This section's xrefs no need to consider-->
<p>The test <xref rid="b6">6</xref></p>
<sec><title>Other</title>
<p><xref rid="b1">1</xref>, <xref rid="b7">7</xref>, <xref rid="b8">8</xref></p>
</sec>
</sec><!--This section's xrefs no need to consider-->
<sec>
<p>Final test test</p>
<sec><title>Third title</title><p>Last text</p></sec>
</sec>
</body>
<bm>
<ref id="b1">The ref01</ref>
<ref id="b2">The ref02</ref>
<ref id="b3">The ref03</ref>
<ref id="b4">The ref04</ref>
<ref id="b5">The ref05</ref>
<ref id="b6" MoveRef="yes">The ref06</ref>
<ref id="b7" MoveRef="yes">The ref07</ref>
<ref id="b8" MoveRef="yes">The ref08</ref>
</bm>
</article>
Here, consider the number 5 for the 'b5' rid, 6 for 'b6', and so on (because the ids are alphanumeric).
A:
Perhaps you can take a different approach rather than trying to find the maximum rid attribute that is not in an "online" section. Not least because it is not entirely clear what the maximum is when you are dealing with an alphanumeric string.
Instead, you could define a key to look up elements in the "online" section by their name
<xsl:key name="online" match="sec[title = 'Online']//*" use="name()" />
And then, another key, to look up the xref elements that occur in other sections
<xsl:key name="other" match="xref[not(ancestor::sec/title = 'Online')]" use="name()" />
Then, you can write a template to match the ref elements, and use an xsl:if to determine whether to add the MoveRef attribute to them:
<xsl:variable name="id" select="@id" />
<xsl:if test="key('online', 'xref')[tokenize(@rid, ' ')[. = $id]] and not(key('other', 'xref')[tokenize(@rid, ' ')[. = $id]])">
Try this much shorter XSLT
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
<xsl:output method="xml" indent="yes" />
<xsl:key name="online" match="sec[title = 'Online']//*" use="name()" />
<xsl:key name="other" match="xref[not(ancestor::sec/title = 'Online')]" use="name()" />
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="ref">
<ref>
<xsl:variable name="id" select="@id" />
<xsl:if test="key('online', 'xref')[tokenize(@rid, ' ')[. = $id]] and not(key('other', 'xref')[tokenize(@rid, ' ')[. = $id]])">
<xsl:attribute name="MoveRef" select="'Yes'" />
</xsl:if>
<xsl:apply-templates select="@*|node()"/>
</ref>
</xsl:template>
</xsl:stylesheet>
You can actually amend the ref template to put the condition in the template match, if you wanted...
<xsl:template match="ref[key('online', 'xref')[tokenize(@rid, ' ')[. = current()/@id]] and not(key('other', 'xref')[tokenize(@rid, ' ')[. = current()/@id]])]">
<ref MoveRef="Yes">
<xsl:apply-templates select="@*|node()"/>
</ref>
</xsl:template>
| {
"pile_set_name": "StackExchange"
} |
Q:
Find max/min points for multivariable functions
I have a question about the general procedure for finding the max/min points of multivariable functions; it would really help if somebody could please clarify my doubts.
So for single variable function, it's pretty straightforward. We use the FOC to find critical points that satisfy f'(x)=0, and then use SOC to figure out whether it's a max or min by checking the sign of f''(x). i.e if f''(x)>0, the function is concave up, hence a minimum point, or if f''(x)<0, the function is concave down, hence a maximum point.
Now, when extending this idea to multivariable functions, we'd still first use the FOC to find all critical points that may or may not be the actual max/min points; but when we subsequently use the SOC, what is there to check? Do I simply check whether the Hessian matrix is positive/negative definite, or do I check the sign of every second-order derivative of f (Fxx, Fyy, Fxy, Fyx, etc.) at those critical points? Or are these two mechanisms effectively the same?
Thanks in advance!
A:
Checking the sign is definitely not the way to go.
$$ \left( \begin {matrix} 2 & 2 \\ 2 & 1 \end {matrix} \right)$$
is indefinite, although its entries all have the same sign. There are some shortcuts in two dimensions, but in general, you have to know whether the Hessian is definite or not. Given a critical point of a $C^2$ function, that the definiteness of the Hessian is a sufficient condition can be seen by employing the Cauchy form of the Taylor remainder
$$f(x_0 + h) - f(x_0) = \underbrace{ \langle \nabla{f}|_{x_0}, h \rangle }_{\text{This term is 0}} + \frac12 \langle H|_{x^*}h, h\rangle$$
where $H|_{x^*}$ is the Hessian evaluated at a point $x^*$ between $x_0$ and $x_0 + h$. Since $f$ is $C^2$, if the Hessian is definite at $x_0$, it will be definite at $x^*$ also, as long as $h$ is small enough.
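For reference, the two-dimensional shortcut mentioned above is the classical second-derivative test: at a critical point of $f(x,y)$, set $D = f_{xx}f_{yy} - f_{xy}^2$. Then
$$D > 0,\ f_{xx} > 0 \implies \text{local minimum}, \qquad D > 0,\ f_{xx} < 0 \implies \text{local maximum}, \qquad D < 0 \implies \text{saddle point},$$
and $D = 0$ is inconclusive. This is just a restatement of whether the $2\times 2$ Hessian is positive definite, negative definite, or indefinite.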
| {
"pile_set_name": "StackExchange"
} |
Q:
Ng-model does not update HTML
On ng-click I change the model but the HTML stays the same...
html
<p>Rezervisanih:<span>{{seatsInfo}}</span></p>
<div ng-click="change()">change</div>
js
$scope.seatsInfo = 20;
$scope.change = function(){
$scope.seatsInfo = 30;
console.log($scope.seatsInfo);
}
A:
In looking at the app that you posted, I see that the update() call is actually within an ng-repeat scope. So you will be impacting a different scope level.
Make use of the $rootScope to ensure that you are setting the value on the correct scope level.
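A minimal sketch of that suggestion; the module and controller names here are hypothetical, since the question doesn't show them, and whether $rootScope is really the right home for this state depends on your app:
app.controller('SeatsCtrl', function($scope, $rootScope) {
  $rootScope.seatsInfo = 20;

  $scope.change = function() {
    // Writing to $rootScope avoids the value being shadowed by the child
    // scope that ng-repeat creates around the clicked element.
    $rootScope.seatsInfo = 30;
    console.log($rootScope.seatsInfo);
  };
});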
| {
"pile_set_name": "StackExchange"
} |
Q:
Simple two way RMI SSL Connection, PKIX path building failed
I have a program with two runnable files, one called ServerImpl and the other called ClientImpl. My aim is to be able to have the ClientImpl connect to the ServerImpl using RMI with SSL and invoke a method. The server should also be able to call methods on the client using RMI and SSL via a callback. I can get the client to connect to the server using SSL and invoke methods on that server; however, I can't get the server to connect back to the client using RMI with SSL.
The program also has two directories that contain two different sets of cert, keystore, truststore files:
resources/client/
client_cert.cer
client_keystore.jks
client_truststore.jks
resources/server/
server_cert.cer
server_keystore.jks
server_truststore.jks
The ServerImpl.java file:
public class ServerImpl implements ServerInt {
private ClientInt client;
public static void main(String[] args) {
ServerImpl server = new ServerImpl();
server.bind();
}
public void bind() {
System.setProperty("java.rmi.server.hostname", "192.168.0.32");
System.setProperty("javax.net.ssl.keyStore", "./resources/server/server_keystore.jks");
System.setProperty("javax.net.ssl.keyStorePassword", "password");
RMIClientSocketFactory rmiClientSocketFactory = new SslRMIClientSocketFactory();
RMIServerSocketFactory rmiServerSocketFactory = new SslRMIServerSocketFactory();
try {
ServerInt si = (ServerInt) UnicastRemoteObject.exportObject(this, 0, rmiClientSocketFactory, rmiServerSocketFactory);
Registry reg = LocateRegistry.createRegistry(1099);
reg.rebind("server", si);
System.out.println("RMIServer is bound in registry");
} catch (RemoteException ex) {
Logger.getLogger(ServerImpl.class.getName()).log(Level.SEVERE, null, ex);
}
}
@Override
public void connect(ClientInt ci) throws RemoteException {
System.out.println("client connected");
}
}
The ClientImpl.java file:
public class ClientImpl implements ClientInt {
private ServerInt server;
public static void main(String[] args) {
ClientImpl client = new ClientImpl();
client.bind();
client.initConnection();
client.connectToServer();
}
public void initConnection() {
System.setProperty("javax.net.ssl.trustStore", "./resources/client/client_truststore.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "password");
try {
Registry reg = LocateRegistry.getRegistry("192.168.0.32", 1099);
server = (ServerInt) reg.lookup("server");
} catch (RemoteException ex) {
Logger.getLogger(ClientImpl.class.getName()).log(Level.SEVERE, null, ex);
} catch (NotBoundException ex) {
Logger.getLogger(ClientImpl.class.getName()).log(Level.SEVERE, null, ex);
}
}
public void bind() {
System.setProperty("java.rmi.server.hostname", "192.168.0.32");
System.setProperty("javax.net.ssl.keyStore", "./resources/client/client_keystore.jks");
System.setProperty("javax.net.ssl.keyStorePassword", "password");
RMIClientSocketFactory rmiClientSocketFactory = new SslRMIClientSocketFactory();
RMIServerSocketFactory rmiServerSockeyFactory = new SslRMIServerSocketFactory();
try {
ClientInt ci = (ClientInt) UnicastRemoteObject.exportObject(this, 0, rmiClientSocketFactory, rmiServerSockeyFactory);
Registry reg = LocateRegistry.createRegistry(5001);
reg.rebind("client", ci);
System.out.println("RMIClient is bound in registry");
} catch (RemoteException ex) {
Logger.getLogger(ClientImpl.class.getName()).log(Level.SEVERE, null, ex);
}
}
public void connectToServer() {
try {
server.connect(this);
} catch (RemoteException ex) {
Logger.getLogger(ClientImpl.class.getName()).log(Level.SEVERE, null, ex);
}
}
@Override
public void sayHelloToClient(String helloText) throws RemoteException {
System.out.println(helloText);
}
}
I then run the ServerImpl.java file, and no problems it runs fine. Then I run the ClientImpl.java file and I get an error when I call the connectToServer method:
java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Can anyone shed some light on what the problem is here and how I might be able to solve it? Failing that, can anyone point me towards a good tutorial on having two RMI entities that use SSL to talk to each other? Thanks.
A:
OK, I've figured it out. The problem was that the respective client/server Java files were still using the default Java TrustStore and not the custom truststore files that I had defined in the original problem code. Here is the correct code in full for anyone else who is looking for a simple demonstration of a two-way RMI client-server connection using SSL.
After creating a blank Java Project, Add in a 'resources' folder into the top level directory that has two sub directories 'client' and 'server'. Then generate two separate sets of certificates, keystores, and truststores and put them in the respective sub directories like so:
resources/client/
client_cert.cer
client_keystore.jks
client_truststore.jks
resources/server/
server_cert.cer
server_keystore.jks
server_truststore.jks
Then create an interface for the server called 'ServerInt':
public interface ServerInt extends Remote {
public void connect(ClientInt ci) throws RemoteException;
}
Another interface for the client called 'ClientInt':
public interface ClientInt extends Remote {
public void sayHelloToClient(String helloText) throws RemoteException;
}
Now create a new java class for the server called 'ServerImpl':
public class ServerImpl implements ServerInt {
public static void main(String[] args) {
ServerImpl server = new ServerImpl();
server.bind();
}
public void bind() {
// System.setProperty("javax.net.debug", "all");
System.setProperty("javax.net.ssl.trustStore", "./resources/server/server_truststore.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "password");
System.setProperty("java.rmi.server.hostname", "192.168.0.32");
System.setProperty("javax.net.ssl.keyStore", "./resources/server/server_keystore.jks");
System.setProperty("javax.net.ssl.keyStorePassword", "password");
RMIClientSocketFactory rmiClientSocketFactory = new SslRMIClientSocketFactory();
RMIServerSocketFactory rmiServerSocketFactory = new SslRMIServerSocketFactory();
try {
// Uncomment this line...
//ServerInt si = (ServerInt) UnicastRemoteObject.exportObject(this, 0);
// and then comment out this line to turn off SSL (do the same in the ClientImpl.java file)
ServerInt si = (ServerInt) UnicastRemoteObject.exportObject(this, 0, rmiClientSocketFactory, rmiServerSocketFactory);
Registry reg = LocateRegistry.createRegistry(1099);
reg.rebind("server", si);
System.out.println("Server is bound in registry");
} catch (RemoteException ex) {
Logger.getLogger(ServerImpl.class.getName()).log(Level.SEVERE, null, ex);
}
}
@Override
public void connect(ClientInt ci) throws RemoteException {
System.out.println("Client is connected");
// Generate a really big block of text to send to the client, that way it will be easy to see in a packet
// capture tool like wireshark and verify that it is in fact encrypted.
String helloText = "";
for (int i = 0; i < 10000; i++) {
helloText += "A";
}
ci.sayHelloToClient(helloText);
}
}
Finally we need a class for the client called 'ClientImpl':
public class ClientImpl implements ClientInt {
private ServerInt server;
public static void main(String[] args) {
ClientImpl client = new ClientImpl();
client.bind();
client.initConnection();
client.connectToServer();
}
public void initConnection() {
try {
Registry reg = LocateRegistry.getRegistry("192.168.0.32", 1099);
server = (ServerInt) reg.lookup("server");
} catch (RemoteException ex) {
Logger.getLogger(ClientImpl.class.getName()).log(Level.SEVERE, null, ex);
} catch (NotBoundException ex) {
Logger.getLogger(ClientImpl.class.getName()).log(Level.SEVERE, null, ex);
}
}
public void bind() {
// System.setProperty("javax.net.debug", "all");
System.setProperty("javax.net.ssl.trustStore", "./resources/client/client_truststore.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "password");
System.setProperty("java.rmi.server.hostname", "192.168.0.32");
System.setProperty("javax.net.ssl.keyStore", "./resources/client/client_keystore.jks");
System.setProperty("javax.net.ssl.keyStorePassword", "password");
RMIClientSocketFactory rmiClientSocketFactory = new SslRMIClientSocketFactory();
RMIServerSocketFactory rmiServerSocketFactory = new SslRMIServerSocketFactory();
try {
// Uncomment this line...
// ClientInt ci = (ClientInt) UnicastRemoteObject.exportObject(this, 0);
// and comment out this line to turn off SSL (do the same in the ServerImpl.java file)
ClientInt ci = (ClientInt) UnicastRemoteObject.exportObject(this, 0, rmiClientSocketFactory, rmiServerSocketFactory);
Registry reg = LocateRegistry.createRegistry(5001);
reg.rebind("client", ci);
System.out.println("Client is bound in registry");
} catch (RemoteException ex) {
Logger.getLogger(ClientImpl.class.getName()).log(Level.SEVERE, null, ex);
}
}
public void connectToServer() {
try {
server.connect(this);
} catch (RemoteException ex) {
Logger.getLogger(ClientImpl.class.getName()).log(Level.SEVERE, null, ex);
}
}
@Override
public void sayHelloToClient(String helloText) throws RemoteException {
System.out.println(helloText);
}
}
That's all there is to it. First run the 'ServerImpl' file; that will create the RMI server. Then run the 'ClientImpl' file: it will create its own RMI registry and then send itself to the server in the connectToServer method. The server receives this message with the client RMI object and then uses that instance of the client RMI object to call the client's methods. All while using SSL.
To verify that it's using SSL, the server generates a really long string of text and sends it back to the client. By using a packet capture tool like Wireshark you can easily see that this message is encrypted. I've included comments in the code that make it easy to turn off SSL so that you can see this text without encryption.
It took me longer than I care to admit to figure all this out, and at the same time I couldn't find any good tutorials on the subject. So hopefully if anyone else is stuck on this problem this will be of some help.
| {
"pile_set_name": "StackExchange"
} |
Q:
Reading a file and extracting data using a regex in PHP
I am trying to echo out the names/paths of the files that are written in logfile.txt. For that, I use a regex to match everything before the first ocurrence of : and output it. I am reading the logfile.txt line by line:
<?php
$logfile = fopen("logfile.txt", "r");
if ($logfile) {
while (($line = fgets($logfile)) !== false) {
if (preg_match_all("/[^:]*/", $line, $matched)) {
foreach ($matched as $val) {
foreach ($val as $read) {
echo '<pre>'. $read . '</pre>';
}
}
}
}
fclose($logfile);
} else {
die("Unable to open file.");
}
?>
However, I get the entire contents of the file instead. The desired output would be:
/home/user/public_html/an-ordinary-shell.php
/home/user/public_html/content/execution-after-redirect.html
/home/user/public_html/paypal-gateway.html
Here is the content of logfile.txt:
-------------------------------------------------------------------------------
/home/user/public_html/an-ordinary-shell.php: Php.Trojan.PCT4-1 FOUND
/home/user/public_html/content/execution-after-redirect.html: {LDB}VT-malware33.UNOFFICIAL FOUND
/home/user/public_html/paypal-gateway.html: Html.Exploit.CVE.2015_6073
Extra question: How do I skip reading the first two lines (namely the dashes and emtpy line)?
A:
Here you go:
<?php
# load it as a string
$data = @file("logfile.txt");
# data for this specific purpose
$data = <<< DATA
-------------------------------------------------------------------------------
/home/user/public_html/an-ordinary-shell.php: Php.Trojan.PCT4-1 FOUND
/home/user/public_html/content/execution-after-redirect.html: {LDB}VT-malware33.UNOFFICIAL FOUND
/home/user/public_html/paypal-gateway.html: Html.Exploit.CVE.2015_6073
DATA;
$regex = '~^(/[^:]+):~m';
# ^ - anchor it to the beginning
# / - a slash
# ([^:]+) capture at least anything NOT a colon
# turn on multiline mode with m
preg_match_all($regex, $data, $files);
print_r($files);
?>
It even skips both your lines, see a demo on ideone.com.
A:
preg_match_all returns every occurrence of the pattern, and [^:]* also matches the empty string. For the first line it will therefore return:
/home/user/public_html/an-ordinary-shell.php, an empty string,  Php.Trojan.PCT4-1 FOUND
and another empty string,
i.e. every maximal run of characters that contains no colon.
To obtain a single result per line, use preg_match instead; but for this task a simple explode on the colon should suffice.
To skip lines you don't want, you can for example build a generator function that yields only the good lines. You can also use a stream filter.
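A minimal sketch of that approach, reusing the loop from the question (explode does the splitting; lines that don't start with a slash, such as the dashes and the empty line, are simply skipped):
<?php
$logfile = fopen("logfile.txt", "r");
if ($logfile) {
    while (($line = fgets($logfile)) !== false) {
        // skip the dashed line, the empty line and anything else that isn't a path
        if (strpos($line, '/') !== 0) {
            continue;
        }
        // everything before the first colon is the file path
        list($path) = explode(':', $line, 2);
        echo '<pre>' . $path . '</pre>';
    }
    fclose($logfile);
} else {
    die("Unable to open file.");
}
?>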
| {
"pile_set_name": "StackExchange"
} |
Q:
"document.getElementById" is not working
I am programming a set of traffic lights using HTML and JavaScript and I have run into a problem. I created a button which should change a line of text on the website to be a number, however whenever it is clicked it does nothing. I assume this is a problem with the document.getElementById script as I don't think it could be anything else. Here's my code:
<p id="dummy">PLACEHOLDER</p>
<script>
var imgArray = ["img0", "img1", "img2", "img3"];
imgArray[0] = new Image(300, 150);
imgArray[0].src = "Assets/TrafficLightRedLight.jpg";
imgArray[1] = new Image(300, 150);
imgArray[1].src = "Assets/TrafficLightRedAmberLight.jpg";
imgArray[2] = new Image(300, 150);
imgArray[2].src = "Assets/TrafficLightAmberLight.jpg";
imgArray[3] = new Image(300, 150);
imgArray[3].src = "Assets/TrafficLightGreenLight.jpg";
var counter = 0;
function count(counter){
if (counter =! 3){
counter + 1;
}
else{
counter = 0;
}
document.getElementById("dummy").innerHTML = counter;
}
</script>
<button type="button"
onclick="count">Test_Function</button>
A:
Four problems:
You need to call count in your handler:
<button type="button" onclick="count()">Test_Function</button>
counter =! 3 should be counter != 3, otherwise you're setting it to false
counter + 1 doesn't do anything. Use counter += 1 or counter++
Don't use counter as a parameter - you'd only modify the local copy.
All the above fixes:
<p id="dummy">PLACEHOLDER</p>
<script>
var imgArray = ["img0", "img1", "img2", "img3"];
var counter = 0;
imgArray[0] = new Image(300, 150);
imgArray[0].src = "Assets/TrafficLightRedLight.jpg";
imgArray[1] = new Image(300, 150);
imgArray[1].src = "Assets/TrafficLightRedAmberLight.jpg";
imgArray[2] = new Image(300, 150);
imgArray[2].src = "Assets/TrafficLightAmberLight.jpg";
imgArray[3] = new Image(300, 150);
imgArray[3].src = "Assets/TrafficLightGreenLight.jpg";
function count() {
if (counter != 3) {
counter++;
} else {
counter = 0;
}
document.getElementById("dummy").innerHTML = counter;
}
</script>
<button type="button" onclick="count()">Test_Function</button>
| {
"pile_set_name": "StackExchange"
} |
Q:
The meaning of the word "good" in a sentence
If an article does what is claimed for it, it will be a valuable good.
This sentence is taken from the advertisements. I'd like to know what the meaning of "good" is. Is "good" correctly used in the sentence? Thanks a trillion.
A:
Good here refers to an object in the economic sense as in "goods and services"
possessions, especially movable effects or personal property.
[Source]
| {
"pile_set_name": "StackExchange"
} |
Q:
Border on the left side of the button only
Here is my button_style.xml, which I am including with my button. However, I still can't seem to get the border on the left. Can anyone help me here please?
ps - My background should be transparent
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android" >
<layer-list>
<item android:left="2dp">
<shape android:shape="rectangle">
<stroke
android:width="1dp"
android:color="#999999"/>
</shape>
</item>
</layer-list>
</selector>
A:
Try this
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android" >
<item>
<shape android:shape="rectangle" >
<solid android:color="@android:color/transparent" />
</shape>
</item>
<item
android:bottom="-2dp"
android:right="-2dp"
android:top="-2dp">
<shape>
<solid android:color="@android:color/transparent" />
<stroke
android:width="2dp"
android:color="#FFF" />
</shape>
</item>
</layer-list>
A:
There is some inherent problem with the borders of the button, but I found what I think is the best way to do it. Let's say you want a border only on the left side of the button. In the MainActivity layout XML file where the button is placed, do it like this -
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="@drawable/newstyle"
android:orientation="vertical" >
<Button
android:id="@+id/button3"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="@null"
android:gravity="center_vertical"
android:paddingLeft="10sp"
android:text="Button"
android:textAllCaps="false"
android:textColor="#939393"
android:textSize="20sp" />
</LinearLayout>
For the newstyle.xml file which will be located in the drawable use this code -
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android" >
<item
android:bottom="0sp"
android:left="-2sp"
android:right="-2sp"
android:top="-2sp">
<shape android:shape="rectangle" >
<solid android:color="#ffffff" />
<stroke
android:width="1sp"
android:color="#d6d6d6" />
</shape>
</item>
</layer-list>
So the whole idea is -
Keep the background of the button as @null and keep the button in the linear layout. Give the background to the linear layout and voila, it's done..:)..
| {
"pile_set_name": "StackExchange"
} |
Q:
H2 embedded mode and software crashes
As you all know H2 is a powerful pure Java DBMS with several features like server/client mode and embedded
When working on a little software with a H2 database ,I ran into a problem :
the software crashes and the connection remains open; when restarting the software I cannot access the database again (it's in embedded mode, so it's locked), and to bypass this problem I had to shut down the Java virtual machine manually using Task Manager.
Is there a way, in case such an event (an application crash) happens, to restore the connection normally?
A:
When the JVM exits normally, H2 normally closes the database itself if you haven't already closed it explicitly.
In worst case scenarios, you might be able to use Thread#setDefaultUncaughtExceptionHandler to terminate the JVM safely and/or close the database
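For example, a minimal sketch of that second idea (Java 8+; it assumes conn is the open embedded connection, and it is just a plain JDBC close in a last-chance handler, not an official H2 API):
Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
    throwable.printStackTrace();
    try {
        conn.close(); // closing the connection lets H2 release the embedded database lock
    } catch (Exception ignore) {
    }
});
Registering it early in main() means an uncaught exception on any thread will still close the database cleanly.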
| {
"pile_set_name": "StackExchange"
} |
Q:
jquery DataTables buttons
I'm having trouble creating custom buttons and editing the buttons.dom.button properties. Here is the code that I'm using;
$(document).ready(function() {
function buildTable(tableName) {
return $('#'+tableName).DataTable( {
dom: 'ifrt',
paging: false,
lengthChange: true,
responsive: true,
columnDefs: [
{
"targets": [ 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 ],
"visible": false,
"searchable": false
},
{
"orderable": false,
"targets": [0, 3, 4, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
}
],
buttons: [
'excel',
{
extend: 'columnToggle',
columns: 0,
text: 'show/hide pics'
}
],
buttons: {
dom: {
button:{
tag: 'li'
}
}
}
});
}
var tablesMen = buildTable('menTable');
$('#menTable_wrapper').prepend('<div class="dropdown"><button class=btn btn-primary dropdown-toggle" type="button" data-toggle="dropdown">dropdown<span class="caret"></span></button><ul class="dropdown-menu"></ul></div>');
tablesMen.buttons().container().appendTo($('.dropdown-menu'));
The buttons get reset to default (excel, pdf, copy, etc.) when I add the
buttons: {
dom: {
button:{
tag: 'li'
}
}
}
I hope that makes sense.
A:
You have an array called "buttons" declared with buttons: [ ... ], and then you immediately replace it with the object buttons: { ... }, so the definition containing your custom buttons is discarded.
EDIT2: Here is your function where I modified it to include the dom attribute as well as added a custom button as an example:
function buildTable(tableName) {
return $('#' + tableName).DataTable({
dom: 'Bfrtip',
paging: false,
lengthChange: true,
responsive: true,
columnDefs: [{
"targets": [1, 2, 3],
"visible": true,
"searchable": false
}, {
"orderable": false,
"targets": [0, 4, 5]
}],
buttons: {
dom: {
button: {
tag: 'li'
}
},
buttons: [{
text: 'My button',
action: function(e, dt, node, config) {
alert('Button activated');
}
}, {
extend: 'excel',
text: 'Save current page',
exportOptions: {
modifier: {
page: 'current'
}
}
}]
}
});
}
EDIT: Note you are also missing quotes in the string, here is the corrected:
$('#menTable_wrapper').prepend('<div class="dropdown"><button class="btn btn-primary dropdown-toggle" type="button" data-toggle="dropdown">dropdown<span class="caret"></span></button><ul class="dropdown-menu"></ul></div>');
| {
"pile_set_name": "StackExchange"
} |
Q:
GoogleMap settings (addMarker, setMapType & CameraUpdate) not working in SupportMapFragment
My map starts as it's supposed to, but the settings for the marker, mapType and zoom never applies in my SupportMapFragment. When I launch the same code in another project, in MainActivity instead, everything works. How do I do to make it work in the SupportMapFragment?
MainActivity (working):
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import com.google.android.gms.maps.CameraUpdate;
import com.google.android.gms.maps.CameraUpdateFactory;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.MapFragment;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.MarkerOptions;
public class MainActivity extends Activity {
private final LatLng BUTIKPLATS = new LatLng(57.873873, 11.974995);
private GoogleMap karta;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
karta = ((MapFragment) getFragmentManager().findFragmentById(R.id.karta)).getMap();
karta.addMarker(new MarkerOptions().position(BUTIKPLATS).title("Vita Fläckens Blommor"));
karta.setMapType(GoogleMap.MAP_TYPE_HYBRID);
CameraUpdate update = CameraUpdateFactory.newLatLngZoom(BUTIKPLATS, 17);
karta.animateCamera(update);
}
}
SupportMapFragment (Not working):
import android.os.Bundle;
import android.view.View;
import com.google.android.gms.maps.CameraUpdate;
import com.google.android.gms.maps.CameraUpdateFactory;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.SupportMapFragment;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.MarkerOptions;
public class Karta extends SupportMapFragment{
public static Karta newInstance() {
Karta f = new Karta();
return f;
}
private GoogleMap karta;
private final LatLng BUTIKPLATS = new LatLng(57.873873, 11.974995);
public void onCreate(View v, Bundle savedInstanceState){
super.onCreate(savedInstanceState);
System.out.println("onViewCreated1");
karta = ((SupportMapFragment)getFragmentManager().findFragmentById(R.id.kontaktVisaFragment)).getMap();
System.out.println("Karta initierad (.getMap)");
if (karta !=null){
System.out.println("Karta != null");
karta.addMarker(new MarkerOptions().position(BUTIKPLATS).title("Vita Fläckens Blommor"));
karta.setMapType(GoogleMap.MAP_TYPE_HYBRID);
CameraUpdate update = CameraUpdateFactory.newLatLngZoom(BUTIKPLATS, 17);
karta.animateCamera(update);
}
}
}
A:
onCreate() of SupportMapFragment is too soon to be trying to manipulate the GoogleMap, as it does not exist yet. You need to wait to a later point in time, when the GoogleMap object exists, or leave the code where it is in your activity.
onViewCreated() from your previous question should be a better choice than onCreate(). However, I had made things too complicated in my answer. Replace:
karta = ((SupportMapFragment)getFragmentManager().findFragmentById(R.id.kontaktVisaFragment)).getMap();
with:
karta=getMap();
| {
"pile_set_name": "StackExchange"
} |
Q:
freeing memory when all instances of a class are destroyed
I have a font class that will load glyphs from a .ttf to openGL textures using sdl. The class consists of individual textures for each glyph and will obviously render them in succession to create displayed text. I figure it's costly to keep loading the .ttf to an sdl surface and then, in turn, using the surface pixel data to generate an openGL texture. So, I have for my program, a KText class that has a member: static std::vector<Font*> OpenedFonts;
When a new item that has inherited KText attempts to open a font, I scan the vector for the opened glyphs and then just return a pointer to the opened glyphs and just use the already made textures for all instances of that text (size/name/color).
The code is
bool KText::LoadFont() {
_Font = CheckOpenedFonts(); //_Font is KFont* pointer
if(_Font == NULL) {
_Font = new KFont;
_Font->LoadFont();
}
if(_Font == NULL) return false;
return true;
}
Clearly, I won't be opening a new font every time, so in my destructor I wouldn't want to simply use delete _Font, for two reasons: another KText object may be pointing to that same set of glyphs, and it would be good to keep the glyphs in memory until the program terminates in case another object is created that attempts to use that font.
Is there a way to wait until all the instances of KFont have left scope?
Thank you!
The solution was:
class KText {
private:
static int TextCnt;
public:
KText();
~KText();
};
//Implementation
int KText::TextCnt = 0;
KText::KText() {
TextCnt++;
}
KText::~KText() {
TextCnt--;
if(TextCnt < 1) {
OpenedFonts* t = FntPnter; //FntPnter is a head pointer to the linked list of fonts
while(t != NULL ) {
FntPnter = t->Next;
delete t;
t = FntPnter;
}
}
}
A:
You can create a static integer member in KFont Let's say static int KFontCnt;, which is the number of KFont objects. Then increase KFontCnt in the KFont constructor, decrease it in KFont destructor. When it reaches 0 in the destructor you can free all KFont instances.
| {
"pile_set_name": "StackExchange"
} |
Q:
C# DataGrid automatically write values to DB
I have a datagrid view which shows table data from a database
Environment:
- C# 4.0
- SQL Server 2005
- Visual studio 2010
What i want is: On a row to row basis automatically save changes. So when I edit the first row and then leave the row. It should save the changes to the database.
I've used this code:
In the From1 Load:
dataSet1.TblObject.RowChanged += rowUpdate;
The eventhandler:
private void rowUpdate(object sender, DataRowChangeEventArgs e)
{
DataRow dr = e.Row;
if (dr.RowState == DataRowState.Modified)
{
tblObjectTableAdapter.Update(dr);
}
}
When I've edited a row in the grid, tblObjectTableAdapter.Update(dr); is called and the data is stored to the table, which is good. But the event keeps on triggering as if it's called recursively: the tblObjectTableAdapter.Update(dr); call itself causes a dataSet1.TblObject.RowChanged event, and I can't differentiate between the two. I'm guessing I'm using the wrong event to store my values, but I can't find any event that sounds logical to me. What am I doing wrong?
I've made a dataset xsd:
I've picked the dataset as datasource in this gridview:
A:
Ok, I read your post a couple of times now; while I can't solve your problem directly, I'll try anyway. What you want is that when you make a change, it doesn't update the database immediately, but saves the information in a list, and the updates are only written when you press a button?
Then you should keep a List of DataRows, and each time a row is changed, add that row to the list. That way the updates aren't made directly, but only when you press the button. (This is also better for performance.)
So instead of the code
DataRow dr = e.Row;
if (dr.RowState == DataRowState.Modified)
{
tblObjectTableAdapter.Update(dr);
}
U should make
DataRow dr = e.Row;
if (dr.RowState == DataRowState.Modified)
{
DatarowLst.Add(dr);
}
private void ButtonSave_Click(object sender, EventArgs e)
{
foreach (DataRow d in DatarowLst)
{
tblObjectTableAdapter.Update(d);
}
}
EDIT:
Ok, sorry for misunderstanding you. I now understand your problem and have looked into the issue.
I found some interesting information on data adapters and tables. It seems the adapter does a row-by-row check in order to obtain information about the changes made (i.e. does it need an update or a delete command?). My GUESS is therefore that this triggers an update to occur, which raises your event again and runs the method recursively.
The information is in the Remarks section on: This site from msdn
They give a link to this site for more information. Maybe you'll find your answer there.
As for a solution to your problem, it seems you can leave your event out and just call the update command. I'm NOT sure about this because I don't have time to check it and I have little experience on this subject.
Well, I hope this helps you a bit further.
| {
"pile_set_name": "StackExchange"
} |
Q:
Help deciphering IMAP traffic
I have configured Windows Live Mail for my Gmail account. It uses IMAP for incoming and SMTP for outgoing mail. I collected packets received/sent by the application over a period of 4 hours using Netmon. I observed that the server every now and then sends a TLS packet with a TCP payload length of 39. I decrypted the packet and it contains:
* 554 EXISTS
Can anyone tell me what is going on?
A:
I found the answer: The server is basically saying it has 554 messages. Each message is assigned a unique identifier. I have told Live Mail to check for new mail every 5 minutes and I indeed see this packet every 5 minutes.
| {
"pile_set_name": "StackExchange"
} |
Q:
VS2010 IntelliSense inside text/x-jquery-tmpl script templates
I've been using jQuery templates which I absolutely love to use. The only drawback from an IDE standpoint is the lack of HTML IntelliSense inside the script tag. Is there a way to fool VS2010 so that markup inside template script tags get IntelliSense and syntax highlighting?
A:
I created a helper method for ASP.NET MVC 3 that works like this, inspired by Html.BeginForm:
within the view:
@using (Html.BeginHtmlTemplate("templateId"))
{
<div>enter template here</div>
}
Anything within the @using scope will be syntax highlighted.
The code for the helper:
public static class HtmlHelperExtensions
{
public static ScriptTag BeginHtmlTemplate(this HtmlHelper helper,string id)
{
return new ScriptTag(helper,"text/html", id);
}
}
public class ScriptTag : IDisposable
{
private readonly TextWriter writer;
private readonly TagBuilder builder;
public ScriptTag(HtmlHelper helper, string type, string id)
{
this.writer = helper.ViewContext.Writer;
this.builder = new TagBuilder("script");
this.builder.MergeAttribute("type", type);
this.builder.MergeAttribute("id", id);
writer.WriteLine(this.builder.ToString(TagRenderMode.StartTag));
}
public void Dispose()
{
writer.WriteLine(this.builder.ToString(TagRenderMode.EndTag));
}
}
A:
I'd solved this issue by creating simple custom serverside control (for ASP.NET 4 WebForms app.):
[ToolboxData("<{0}:JqueryTemplate runat=server></{0}:JqueryTemplate>")]
[PersistChildren(true)]
[ParseChildren(false)]
public class JqueryTemplate : Control {
private bool _useDefaultClientIdMode = true;
[DefaultValue(ClientIDMode.Static)]
[Category("Behavior")]
[Themeable(false)]
public override ClientIDMode ClientIDMode {
get {
return (_useDefaultClientIdMode) ? ClientIDMode.Static : base.ClientIDMode;
}
set {
base.ClientIDMode = value;
_useDefaultClientIdMode = false;
}
}
protected override void Render(HtmlTextWriter writer) {
writer.AddAttribute(HtmlTextWriterAttribute.Type, "text/x-jquery-tmpl");
writer.AddAttribute(HtmlTextWriterAttribute.Id, ClientID);
writer.RenderBeginTag(HtmlTextWriterTag.Script);
base.Render(writer);
writer.RenderEndTag();
}
}
and put each jquery template inside it (instead of <script> tag):
<some:JqueryTemplate runat="server" ID="myTemplateId">
... your template code goes here ...
</some:JqueryTemplate>
will rendered it in HTML as:
<script type="text/x-jquery-tmpl" id="myTemplateId">
... your template code goes here ...
</script>
A:
I don't think you can unless you trick the editor to not know you are writing a script tag, like writing it in code (HTML helper or so). I'd work around it by changing the tag name (or commenting it) temporarily and bringing it back after I'm done. Sure that's a silly solution.
One other thing you can do is instead of using script tag, use any other tag like div setting it's style to display:none. It should work exactly the same way (example in middle of this article).
| {
"pile_set_name": "StackExchange"
} |
Q:
Ways To Colour A Tetrahedral With 4 Different Colours
I've been working through a text on combinatorics, and came across this question:
Along with it's solution:
What I don’t understand about this solution is the case where the tetrahedron is painted with 4 different colours. I agree that all tetrahedra painted with 4 colours can be oriented so that the bottom is $R$ and $G$ faces towards you. However, there are 2 things I don’t understand:
I agree that in this orientation, with $R$ at the bottom and $G$ towards you, $BW$ and $WB$ look different. However, how can we be sure there is no way to orient these two so that they look the same? Can someone explain why this is the case to me? I find it difficult to prove to myself that there is no way to orient these two so that they look the same, because I’m not very good at visualizing stuff - I can’t visualize how either of these will look once they are rotated.
How can I be sure that these are the only 2 unique arrangements? What if I placed two shapes such that blue faces forward, and white faces towards the bottom, then find that there are a bunch of unique arrangements? How would I know these arrangements are the same arrangements as the two unique arrangements with red in front and green at the bottom? Again, could you explain this for a person who’s not very good at visualizing stuff?
Thanks in advance!
A:
$1.$ Since we have fixed $R$ at the bottom and $G$ towards us, any rotation/flipping will disrupt this arrangement of $R$ and $G$ relative to us. For example, consider the case $BW$. You can try to make this identical to the $WB$ case, but notice that to get the $WB$ case we also need $R$ down and $G$ facing us. But that is exactly the orientation we started from, so we just get $BW$ back. So these two cases are surely different.
$2.$ We don't care about the order of colours. The only thing that matters is that we can take any such tetrahedron and orient it so that $R$ is facing downward and $G$ is facing towards us. Then there are only two possibilities for the left and right faces: $BW$ or $WB$, which we have already accounted for. It's safe to say there are only two arrangements using all four colors.
| {
"pile_set_name": "StackExchange"
} |
Q:
How does kinetic energy work in braking a vehicle?
Do the brakes have to do more work (ignoring air resistance) slowing a vehicle from $10\ \mathrm{m/s}$ to $8\ \mathrm{m/s}$ than from $8\ \mathrm{m/s}$ to $6\ \mathrm{m/s}$?
Say a $1000\ \mathrm{kg}$ vehicle is moving at $10\ \mathrm{m/s}$, it has a kinetic energy of
$$\frac12\times1000\ \mathrm{kg}\times(10\ \mathrm{m/s})^2=50\,000\ \mathrm J$$
Then the brakes are applied, and it slows to $8\ \mathrm{m/s}$, so now has a kinetic energy of
$$\frac12\times1000\ \mathrm{kg}\times(8\ \mathrm{m/s})^2=32\,000\ \mathrm J$$
The brakes are now applied again, and it slows to $6\ \mathrm{m/s}$, now the kinetic energy is
$$\frac12\times1000\ \mathrm{kg}\times(6\ \mathrm{m/s})^2=18\,000\ \mathrm J$$
So in the first braking instance, $50\,000\ \mathrm J - 32\,000\ \mathrm J = 18\,000\ \mathrm J$ of kinetic energy were converted into heat by the brakes.
In the second braking instance, $32\,000\ \mathrm J - 18\,000\ \mathrm J = 14\,000\ \mathrm J$ of kinetic energy was converted into heat by the brakes.
This doesn't seem intuitively right to me; I would have imagined the work required from the brakes depends only on how much the velocity was reduced, regardless of the starting velocity.
A:
The work is basically the amount of energy that is used to make something move. So first some math to gain insight into how work works:
In the case of constant force work is defined as $$W=F s,$$ where $W$ is work, $F$ is the applied force and $s$ is the distance the object traveled in the direction of the force. The force is defined as $$F=m a,$$ where $m$ is the mass of the object and $a$ its acceleration. For constant force we have constant acceleration, which can be computed as $$a=\frac{v_2-v_1}{t},$$ where $v_2$ is the end velocity, $v_1$ is the starting velocity and $t$ is the time that passed during slowing down from $v_1$ to $v_2$. We also need the distance that the object traveled, which is: $$s=v_1 t +\frac{at^2}{2}=v_1 t +\frac{v_2-v_1}{2}t=\frac{v_2+v_1}{2}t,$$ where we plugged in our formula for acceleration. Now to put it all together we get:
$$W=m\frac{v_2-v_1}{t}\frac{v_2+v_1}{2}t=m\frac{v_2^2-v_1^2}{2}=E_2-E_1,$$ where $E_2$ is end kinetic energy and $E_1$ is starting kinetic energy of the object.
So why is this not proportional to the velocity difference but to the difference of the squared velocities? That is simply because the applied force is proportional to the velocity difference, through the acceleration being proportional to the velocity difference. That makes sense, doesn't it? To slow down your car, the force needs to be bigger the bigger the velocity difference is, if it is to take the same amount of time.
But this force you need to multiply by the distance traveled, and that distance depends on your initial velocity. The bigger your initial velocity, the bigger the distance you travel while slowing down by the same amount of speed with the same acceleration, which seems pretty intuitive to me. So once you multiply the force, which is proportional to the velocity difference, by something that is bigger the bigger your initial velocity is, the resulting work must be bigger the bigger the initial velocity is, for the same velocity difference. Just as your computation suggests.
A:
It looks like you know how to work through the formulas, but your intuition isn't on board. So any answer that just explains why it follows from the formula for kinetic energy might not be satisfying.
Here is something that might help your intuition. For the moment, think about speeding things up rather than slowing them down, since the energy involved is the same. Have you ever helped someone get started riding a bike? Let's imagine they're just working on their balance, and not pedaling. When you start to push, it's easy enough to stay with them and push hard on their back. But as they get going faster, you have to work harder to keep the same amount of force at their back.
It's the same thing with pushing someone on a swing. When they're moving fast, you have to move your arm fast to apply as much force, and that involves more energy.
If that isn't helpful, consider a more physically precise approach. Suppose, instead of regular brakes, you have a weight on a pulley. The cable goes from the weight straight up over the pulley, straight back down to another pulley on the floor, and then horizontally to a hook that can snag your car's bumper. And just for safety, assume the weight is pre-accelerated so the hook matches the speed of the car as you snag it. Some mechanism tows the hook and then releases it just as it snags your car. Then all the force of the weight goes to slowing the car down.
If you snag the hook at 100 kph, that weight will exert the same force, and hence the same deceleration, as if you snag the hook at 10kph. The same deceleration means you slow down the same amount in the same time. But obviously the weight is going to go up a lot farther in one second if you're going 100 kph than if you're going 10 kph. That means it's going to gain that much more potential energy.
A:
Work is force times distance.
Assuming that your brakes apply the same force in each deceleration, it takes the same amount of time to go from 10m/s to 8m/s as it does to go from 8m/s to 6m/s. However, the vehicle is slower in the second deceleration, so it doesn't travel as far. As such, force is the same, but distance is smaller, and less work is done. Exactly what you expect from differencing the kinetic energies.
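To put the question's own numbers on that (an illustration, assuming a constant braking force of $1000\ \mathrm N$ on the $1000\ \mathrm{kg}$ car, i.e. a deceleration of $1\ \mathrm{m/s^2}$): slowing from $10$ to $8\ \mathrm{m/s}$ takes $2\ \mathrm s$ and covers an average of $9\ \mathrm{m/s}$ for $2\ \mathrm s$, i.e. $18\ \mathrm m$, so the brakes do $1000\ \mathrm N \times 18\ \mathrm m = 18\,000\ \mathrm J$ of work; slowing from $8$ to $6\ \mathrm{m/s}$ also takes $2\ \mathrm s$ but covers only $7\ \mathrm{m/s} \times 2\ \mathrm s = 14\ \mathrm m$, i.e. $14\,000\ \mathrm J$, exactly the two numbers obtained by differencing the kinetic energies.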
To see that traveled distance is actually important, just consider the ground that supports you. It constantly applies quite some force on you, but it does exactly zero work because it does not move up/down with you on top. A lift, however, needs to put in energy to get you to the top of a building: It pushes on you with the same force as the ground does, but it also moves upwards in the direction of the force, and thus transfers energy to you. The work done by the lift is exactly your gravitational force times the vertical distance you traveled.
| {
"pile_set_name": "StackExchange"
} |
Q:
Update many in mongoose
I have a very simple case. I want to update my collection every midnight.
Im using node-schedule:
schedule.scheduleJob('0 0 * * *', () => {
Users.updateMany();
});
All I want to do, is to loop over every document in my collection (Users) and then if User.created is false, I want to turn it into true.
In javascript it would be:
for (let user in Users) {
if (user.created === false) {
user.created = true;
}
}
How to do it in mongoose? Thanks!
Edit: The story is very simple: I just want to iterate over every element in my db using mongoose and, if an element's "created" field is false, change it to true.
A:
You can use updateMany() methods of mongodb to update multiple document
Simple query is like this
db.collection.updateMany(filter, update, options)
For more documentation on updateMany, read here
As per your requirement the update code will be like this:
User.updateMany({"created": false}, {"$set":{"created": true}});
Here you need to use $set because you just want to change created (from false to true). For reference: if you want to replace the entire document, then you don't need to use $set.
A:
You first need a query to find the documents you want to update. This is simply:
{"created": false}
Then you need an update query to tell mongo how to update those documents:
{"$set":{"created": true}}
You need to use the $set operator to specify which fields to change, otherwise it will overwrite the entire document. Finally you can combine these components into a single mongo call with an additional parameter to tell mongo we want to modify multiple documents:
User.update({"created": false}, {"$set":{"created": true}}, {"multi": true}, (err, writeResult) => {});
Mongoose tries to closely replicate the mongo API so all this information can be found solely within MongoDB's documentation: https://docs.mongodb.com/manual/reference/method/db.collection.update/
| {
"pile_set_name": "StackExchange"
} |
Q:
Calculating the floor of an element in C++ in O(logn)
So, I have to calculate the floor of a number in a sorted array in O(logn) where floor means, the greatest number in the array which is smaller than the given number.
For example, let the input array be {1, 2, 8, 10, 10, 12, 19}
For x = 0: floor doesn't exist in array
For x = 1: floor = 1
For x = 5: floor = 2
For x = 20: floor = 19
I made the following function:
int bina (int arr[], int low, int high, int num)
{
if ( arr[low] > num )
return -1;
else if (arr[high] <= num )
return high;
int mid = (low+high)/2;
if ( arr[mid] == num )
return mid;
// floor of num lies from low to (mid-1)
else if (arr[mid] > num)
{
if ( (mid-1) >= low && arr[mid-1] < num )
return mid-1;
else
bina(arr, low, mid-1, num);
}
// floor of num lies between (mid+1) to high
else if ( arr[mid] < num )
{
if ( (mid+1) <= high && arr[mid+1] > num )
return mid;
else
bina(arr, low, mid+1, num);
}
}
This is called in main by the call bina(arr,0,k-1,n). However, on putting in these values (in the above example), it always returns 1. I don't know what the problem is. I cross-checked my logic 3 times, and it seems to be correct. Can someone help me out? Thanks!
A:
The code in the final case looks suspicious.
Try changing:
bina(arr, low, mid+1, num);
to
return bina(arr, mid+1, high, num);
You may also need to add a return to the line bina(arr, low, mid-1, num); as well.
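Putting both fixes together, a corrected version of the whole function could look like this (same structure as the original, just with the bounds of the last recursive call and the two missing returns fixed):
int bina (int arr[], int low, int high, int num)
{
    if ( arr[low] > num )
        return -1;
    else if (arr[high] <= num )
        return high;
    int mid = (low+high)/2;
    if ( arr[mid] == num )
        return mid;
    // floor of num lies from low to (mid-1)
    else if (arr[mid] > num)
    {
        if ( (mid-1) >= low && arr[mid-1] < num )
            return mid-1;
        else
            return bina(arr, low, mid-1, num);
    }
    // floor of num lies between (mid+1) and high
    else
    {
        if ( (mid+1) <= high && arr[mid+1] > num )
            return mid;
        else
            return bina(arr, mid+1, high, num);
    }
}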
| {
"pile_set_name": "StackExchange"
} |
Q:
Can I use "Dress" for men's wear?
My friends laughed at me when I said "nice dress man" because "dress" is used for only females. Is this correct? I was always thinking dress is just synonym of costumes and clothes. Is there any gender specific usage of "dress"?
In India girls/woman use word "dress" to differentiate from the traditional dresses that women wear like saree, lahenga, chudidar, ghaghra etc., so anything other than Indian traditional clothing is called "dress" including Ts and Jeans.
A:
The meaning of "dress" is context-sensitive.
In "Nice dress, man!", the "nice" modifier makes it sound like you are referring to a specific item of clothing that you like. Therefore, I would assume that you are referring to a skirt-like garment. The same goes for "a dress", "the dress", "this dress", "your dress", or any modifier that makes it sound specific.
There are, however, situations where "dress" has a more generic meaning:
"Dress" used as an adjective is likely to be generic:
This company has a strict dress code: suits and ties for men, pantsuits for women.
"Dress" used as a verb is likely to be generic:
This party will be an opportunity to dress up.
It's taking you forever to get dressed!
"Dress" used as a noun requires thought:
The wedding invitation says formal dress requested, so I'll have to get my old suit altered.
A:
As a native U.S. English speaker, if I heard someone say, "Nice dress!", I would assume that they were referring to a dress, a specific item of clothing usually worn only by women and girls. To say that a male dresses well, you might say "Nice dresser!" instead, or "Nice clothes!", or maybe even "Nice outfit!".
Similarly, statements such as "He dresses nicely", or "He arrived in formal dress" could apply to both male and female.
A:
"Dress" can mean "clothing" or "attire", but it's usually used that way in the abstract, referring to clothes in general. It's not generally used to mean the specific outfit that a person is wearing.
I would take "nice dress, man!" to refer to a dress (singular noun) that's visible for you to remark on, not to dress (non-count noun) in general. A dress, singular, is a long skirted article of clothing that conventionally is worn by women and not by men (but, you know, it's the 21st century). An image search for "dress" should give you the general idea.
Just to confuse matters, a "dress shirt" is an article of men's clothing. In the US it means pretty much any smart collared shirt, in the UK it means a more specific style of formal shirt, which USians possibly refer to as a "tuxedo shirt". I'm unsure because I've never been to an event in the US that called for one. "Dress uniform" is the most formal military uniform, and I believe it can be called "dress" for short. The term "battledress" refers to standard combat uniforms, especially old-style ones. Not, as you might otherwise expect, to a kevlar frock.
So you might say, "nice dress shirt, man" or "nice dress [uniform], man", if the specific circumstances called for it, and that wouldn't imply women's clothing.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get month difference in jquery having only months and years
How to get the month difference in jquery while my input contains just months and years, but not day? My input is from :(4-2011) To :(5-2015) .
A:
You could do:
var a = "1-2014";
var b = "4-2015";
a = a.split('-');
b = b.split('-');
months = (b[1]-a[1]) * 12 + (b[0]-a[0]);
alert(months);
| {
"pile_set_name": "StackExchange"
} |
Q:
How's .NET multi-platform approach different than Java's back in the days?
TL;DR
How is the new .NET approach to being a multi platform framework
better than what Java did long ago?
What are the key differences in the implementation?
What are the advantages and disadvantages of each?
You can nowadays compile and run C# code "natively" in Linux and Mac. All of this because .NET is going multi platform. It was 2004 more or less when I started developing and by the time I heard about Java it wasn't as popular as it once was.
From what I understand, Java used what was called an invisible virtual machine to run Java code and, again from what I understand, .NET is being compiled directly for the target operating system. If that is correct I can see how .NET would outperform Java. Am I wrong here? Is there more to it than this?
A:
It's not.
.Net code compiles to IL which is run on a CLR
Its always been multi-platform in this sense and Mono has been around for a while now. http://www.mono-project.com/
I guess what you are referring to is the new .Net Standard and .Net Core projects. Essentially they are trying to slim down the key components of the .net framework in order to make cross platform CLRs easier to implement.
So you can now run .Net Core web sites running .Net Standard code on Linux boxes with a few clicks.
If there is a difference its probably that there doesn't seem to be a big push to get the WPF desktop app framework ported over yet. I think there is an acceptance that Java (apps) failed because although they would run on anything it always looked a bit pants.
Its hard to get that smooth feel on different platforms without writing specific presentation code for each. Which leaves you with two codes bases whatever you do.
| {
"pile_set_name": "StackExchange"
} |
Q:
Windows 7 keeps losing network settings and being in an unidentified network every time I reboot
I recently changed my router to a Cisco 857 and had to re-setup my network. Everything looks great except that my main workstation loses its IP every time I reboot it.
It had added more networks when I made the changes in Network and Sharing Center, with names like Network 2, Local Area Connection #2, etc., and I wanted to clean it up, so I renamed them and also merged networks as in the screenshot below:
I am not really sure but I think this messed up everything. If I set up my static IPs all looks good but whenever I restart my nic changes to auto-negotiate and is in an unidentified network.
If I set my static IPs it remains an unidentified network but when I uninstall and reinstall nic's driver it successfully gets in "Home Network". I tried changing to a Work/Bussiness Network but I got same results.
I hope someone can point me out to the correct direction.
UPDATE
Sorry about reviving the topic but it seems that each time I clean install windows I get the same problem. I just did a double clean install of windows 10 and now, after a reboot, my PC boots to an unidentified network (with internet access).
Each time I manually switch to a static IP it seems it works fine but after a reboot I get an unidentified network. It seems weird to me that it takes it ~1 min to lose the yellow warning icon after booting. I think that it's something to do with my router. Do I have to clean the mac address table on it maybe? Do you have any suggestions?
A:
You might try to set a DHCP static binding in your Cisco router configuration.
Issue ipconfig /all command on Windows to locate your MAC Address.
Enter your router's configuration CLI and input the following commands:
Router(config)# ip dhcp pool YourPool
Router(dhcp-config)# host A.B.C.D M.A.S.K
Router(dhcp-config)# client-identifier 1234.1234.1234
You may replace A.B.C.D by your client's IP address, M.A.S.K by the subnet mask and 1234.1234.1234 by its MAC Address.
| {
"pile_set_name": "StackExchange"
} |
Q:
Weird SSH behaviour
I have 3 computers. All have SSH running and All have No password mode on, PKI only.
The first has a keyPair used for SSH access to the other two, call this machine A.
Therefore, both B and C have A's public key in their authorized_keys, and A can ssh to either without a password.
However, C doesn't have B's public key in it's authorized_keys file, yet, when I ssh from A -> B, and then SSH from B -> C, C let's the ssh session connect without a password.
Why is this happening?
Surely C should deny the session request from B, despite the fact that I'm SSH'd from A into B?
A:
You should look into ssh-agent and AllowAgentForwarding.
Your private key on A must be loaded into your local ssh-agent, and AllowAgentForwarding must be enabled. The authentication challenge that C generates for B is then forwarded by B back to A (a chain of trust), and the ssh-agent on A answers the cryptographic challenge from C that B relayed, which authenticates you on C and lets you log in to it from B.
Disallow AllowAgentForwarding in sshd on B and C and it won't occur anymore.
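That is a single directive in the sshd configuration on B and C (followed by restarting the sshd service):
# /etc/ssh/sshd_config on B and C
AllowAgentForwarding no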
| {
"pile_set_name": "StackExchange"
} |
Q:
Python Add in new rows with new data based on Partial Match
Table 1
|Location|Type|Supplier| ID |Serial|
| MAB |Ant | A | A123 |456/56|
| MEB |Ant | B | A123 |456/56|
Table 2
|Location |Type|Supplier| ID |Serial|#####|
| MAB+MEB |Ant | A/B | A123 |456/56|123-4|
| MAB+MEB |Ant | A/B | A123/B123 |456/56|432-1|
| MAB+MEB |Ant | A/B | A123/B123 |456/56|432-1|
Table 3
|Location|Type|Supplier| ID |Serial|#####|
| MAB |Ant | A | A123 |456/56|123-4|
| MAB |Ant | A | A123 |456/56|432-1|
| MAB |Ant | A | A123 |456/56|432-1|
| MEB |Ant | B | A123 |456/56|123-4|
| MEB |Ant | B | A123 |456/56|432-1|
| MEB |Ant | B | A123 |456/56|432-1|
As illustrated above, if the cell content of Table 1's 'Location', 'Supplier', 'ID' and 'Serial' columns is contained in the corresponding columns of Table 2, Table 3 is to be generated.
*Note that Table 1 is used as the core template; if the relevant column cells are contained in Table 2, we are merely replicating the rows of Table 1 and adding the '#####' column to each of those rows.
Please advise how to produce Table 3.
My logic: for a, b, c, d in Table 1, if a, b, c, d are contained in Table 2, append 'Subcon Part #' from Table 2 to Table 1 by column, concatenate all 'Subcon Part #' values with ',', then explode the concatenated 'Subcon Part #' to generate rows, one per unique 'Subcon Part #'.
Here a, b, c, d are the columns of interest, the links between Table 1 and Table 2.
A:
Here is what I would suggest, first extracting the values from Table 2 and then merging this transformed DataFrame with table 1 on the variables of interest:
First, I reproduce your example:
import pandas as pd
import re
# reproducing table 1
df1 = pd.DataFrame({"Location": ["MAB", "MEB"],
"Type" : ["Ant", "Ant"],
"Supplier":["A","B"],
"ID": ["A123","A123"],
"Serial": ["456/56","456/56"]})
# then table 2
df = pd.DataFrame({"Location": ["MAB+MEB", "MAB+MEB", "MAB+MEB"],
"Type": ["Ant", "Ant", "Ant"],
"Supplier": ["A/B", "A/B","A/B"],
"ID": ["A123", "A123/B123", "A123/B123"],
"Serial":['456/56','456/56','456/56'],
"values_rand":[1,2,3]})
# First I split the column I am interested in based on regexp you can tweak according
# to what you want:
r = re.compile(r"[a-zA-Z0-9]+")
df['Supplier'], df["ID"], df["Location"] = df['Supplier'].str.findall(r),\
df['ID'].str.findall(r), \
df['Location'].str.findall(r)
table2 = pd.merge(df['Supplier'].explode().reset_index(),
df["ID"].explode().reset_index(),on="index", how="outer")
table2 = pd.merge(table2, df["Location"].explode().reset_index(),
on="index", how="outer")
table2 = pd.merge(table2, df.loc[:,["Type","Serial",
"values_rand"]].reset_index(), on="index",how="left")
result = (pd.merge(table2,df1, on=['Location' , 'Supplier' , 'ID' , 'Serial',"Type"])
.drop(columns="index"))
The result is
Supplier ID Location Type Serial values_rand
0 A A123 MAB Ant 456/56 1
1 A A123 MAB Ant 456/56 2
2 A A123 MAB Ant 456/56 3
3 B A123 MEB Ant 456/56 1
4 B A123 MEB Ant 456/56 2
5 B A123 MEB Ant 456/56 3
Hope it helps
| {
"pile_set_name": "StackExchange"
} |
Q:
how to convert date time if passing null value in date field
How to store null value and how to insert null values in database?
I'm getting this error.
String was not recognized as a valid DateTime.
if (taskObj.EstimatedTime == null)
{
taskObj.EstimatedTime = Convert.ToDateTime(estimateTimeLabel.Text);
}
public DateTime EstimatedTime
{
set { estimatedTime = value; }
get { return estimatedTime; }
}
A:
You need to specify in your database that you can allow null for the specific column.
If you already have a database you can run something like:
ALTER TABLE YourTable ALTER COLUMN YourColumn datetime2 NULL
This will allow you to use null in your database.
You also need to convert your datetime in code to a Nullable type. Otherwise .NET comes out with a default date 1/1/0001 12:00:00 AM (not very helpful).
There are two ways to do this. I personally recommend the first but it's purely down to coding style.
You can do this by either using the ? operator:
public DateTime? EstimatedTime
{
set { estimatedTime = value; }
get { return estimatedTime; }
}
Or by the Nullable generic function:
public Nullable<DateTime> EstimatedTime
{
set { estimatedTime = value; }
get { return estimatedTime; }
}
Don't forget to convert your estimatedTime field to a nullable type also (again either by using the ? operator or Nullable.
I hope that helps. If not then leave me a comment and I will do my best to assist you.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I create a moving average in SQL Server that resets for each unique group?
I have a list of contracts and I have a short query that will calculate a moving average over the last 30 entries but I would like it to reset for each contract. Here is what I have so far.
SELECT
contract,
tradedate,
settle,
AVG(settle) OVER (ORDER BY contract, tradedate ROWS between 29 PRECEDING and CURRENT ROW) AS MA30
FROM
Pricing.dbo.MasterReport$
The output looks like this:
contract tradedate settle MA30
----------------------------------------------
1RF18 2018-02-02 0.90277 0.95134
1RF19 2017-10-24 0.74563 0.943993214285714
I need the MA30 to reset for 1RF19 and start a new moving average. How can I do this?
A:
contract should go in the PARTITION BY clause of the window function rather than in the ORDER BY clause:
SELECT
contract,
tradedate,
settle,
AVG(settle) OVER (
PARTITION BY contract
ORDER BY tradedate
ROWS BETWEEN 29 PRECEDING and CURRENT ROW
) AS MA30
FROM Pricing.dbo.MasterReport$
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I make an image fade into color upon hover?
I'm very new to web dev right now, and I'm currently trying to make an image fade into color upon hovering over it. This is what I've got right now:
html:
<body>
<img src=imgmonochrome.jpg id=img1>
</body>
css:
#img1 {
position: top right;
height:49%;
width:49%;
transition: content 0.5s ease;
}
#img1:hover {
transition: content 0.5s;
content: url('imgcolor.jpg');
}
The image will switch, but will not fade in.
I've looked all over for answers on this, but I can't find any that use just HTML and CSS (cause I'm illiterate in javascript/jQuery ((but going to learn very soon for this very reason)))
Help would be appreciated.
A:
YES, this is possible... But not in the traditional sense.
In order to accomplish this, you'll need to forgo <img />, and instead make use of two images presented with content: url() in :before and :after pseudo-classes. Set the :before to be your starting image, and :after to be your target image. Then set the opacity of :after to 0 by default, and set the two pseudo-elements to sit on top of one another. Finally, set a :hover rule for both :before and :after which toggles their opacity, and use transition: opacity to control the fade.
This can be seen in the following:
* {
margin: 0;
}
.image:before {
content: url("https://via.placeholder.com/150/FF0000/00FFFF");
transition: opacity 0.5s ease;
-webkit-transition: opacity 0.5s ease;
}
.image:after {
position: absolute;
left: 0;
opacity: 0;
content: url("https://via.placeholder.com/150/00FFFF/FF0000");
transition: opacity 0.5s ease;
-webkit-transition: opacity 0.5s ease;
}
.image:hover:after {
opacity: 1;
}
.image:hover:before {
opacity: 0;
}
<div class="image"></div>
| {
"pile_set_name": "StackExchange"
} |
Q:
puphpet: how to configure it to automatically clone a repository?
I'm starting to use puphpet, I want to use it to automatically clone a repository on provision but I'm not able to find a way, is this possible?
A:
One possibility that I found is to create a .sh script in puphpet/files/exec-once-unprivileged/
The content of the script could be something like:
cd /vagrant
# Add github.com into known hosts to avoid interactive question
ssh -T [email protected] -o StrictHostKeyChecking=no
git clone ssh://[email protected]/repository
If you want to clone a private repository, you can use ssh-agent to make the SSH keys of the host machine available inside the VM. To do this, run on the host machine:
ssh-add ~/.ssh/id_rsa
| {
"pile_set_name": "StackExchange"
} |
Q:
sorted dict returns doesn't make sense / python
I have been debugging a program I have been trying to get running correctly, and it all comes down to the first line of code as below. I have a dictionary of keys (string) and values (integer). I am trying to sort these in place in ascending order so that I can get the smallest element. However the order of the returned values doesn't make sense. It is not returning the smallest first, in fact it returns the largest (even though documentation clearly says it should be in ascending order). It is not in alphabetical order either, even though the first string is AAAAA..AAA - but it doesn't follow an alphabetical order. I am not even sure how to access the elements of what is returned by sorted and what type it is. According to the errors I have received while experimenting, it is a "list". How can I solve this problem? This one line is making the computation all wrong.
kmerMin = (sorted(topkmerdict,key=lambda x: x[1]))
print kmerMin
print kmerMin[0]
print topkmerdict[kmerMin[0]]
A:
What you wrote returns a list of the dict's keys, sorted by the value of the 2nd character of the keys (x[1] picks out the 2nd character of key x).
It's not clear you want instead. If, e.g., you want the keys sorted by their associated values, then here's one way to do it:
>>> d = {"a": 12, "b": 6, "c": 3}
>>> sorted(d, key=lambda k: d[k])
['c', 'b', 'a']
Another way to do exactly the same, faster but perhaps less obvious:
>>> sorted(d, key=d.__getitem__)
['c', 'b', 'a']
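And if all you actually need is the key with the smallest value (rather than the whole sorted order), you can skip the sort entirely:
>>> min(d, key=d.get)
'c'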
| {
"pile_set_name": "StackExchange"
} |
Q:
How to set up a connection to a database for a program that will be distributed
I have a program that currently points to a database on my local drive. I will be placing the dataset on a network drive on a server and then distributing the program to other compters. How am I to set up a connection that will work on other compuers(where the network drive letter may be different)? I have tried seting this up through the App.Config file as well as using differnent Data Source configurations in the OleDbConnection. This is shortened version of my connection:
string strSQL = "INSERT INTO TestTable(Name1, Address) VALUES(@FirstName, @Address)";
OleDbConnection myConnection = new OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\TEMP\\TestDatabase.accdb");
OleDbCommand myCommand = new OleDbCommand(strSQL, myConnection);
myCommand.Parameters.AddWithValue("@FirstName", txtName.Text);
myCommand.Parameters.AddWithValue("@Address", txtAddress.Text);
try
{
myConnection.Open();
myCommand.ExecuteNonQuery();
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
finally
{
myConnection.Close();
}
A:
Mike's comment above re: UNC paths makes good sense. Just use
Data Source=\\servername\sharename\path\to\data\file.accdb;
| {
"pile_set_name": "StackExchange"
} |
Q:
How to put semi transparent buttons over an image in layout?
What I am trying to do is put a few buttons, made from PNG images that have an opaque border and a semi-transparent interior, over an image that will be controlled (zoomed, panned).
Something like this:
What is the best way to achieve this? What layout and what views should be used? Maybe there is a similar tutorial for such an app design.
A:
I guess I'd use a relative layout and skinned buttons. The only issue you'll have with skinned buttons is that you'll need to be sure you are using 32bit pngs (with a transparency layer) as your button skins.
Here's a code chunk for doing button skins - they get set as the background of the button object, and you place them in the drawable folder alongside your images.
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
<item android:state_pressed="true"
android:drawable="@drawable/btn_back_down" /> <!-- pressed -->
<!-- item android:state_focused="true"
android:drawable="@drawable/button_focused" /--> <!-- focused -->
<item android:drawable="@drawable/btn_back_up" /> <!-- default -->
</selector>
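To actually layer such a button over the image, a RelativeLayout along these lines should do it (ids and drawable names here are made up for the example; the selector above would be saved as res/drawable/btn_back_selector.xml):
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <!-- the zoomable/pannable image fills the screen -->
    <ImageView
        android:id="@+id/photo"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:src="@drawable/photo" />
    <!-- semi-transparent skinned button drawn on top of it -->
    <Button
        android:id="@+id/btn_back"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_alignParentLeft="true"
        android:background="@drawable/btn_back_selector" />
</RelativeLayout>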
| {
"pile_set_name": "StackExchange"
} |
Q:
Matrix Processing in the Quadratic Sieve
I am working through the example in implementation of the quadratic sieve, and I have got stuck at the very last part: the matrix processing. In the example we must find the vector $S$ by left null space.
However if we transpose the matrix A and attempt to apply Gaussian Elimination to find the solution to S as in the example, the resulting solution does not match up with the answer in the example.
What am I missing here? Is the "mod 2" part left out of the null space method or applied later to the resulting solution?
How is the resulting $S$ vector then used in the next two equations we find:
A:
There might be a more instructive way to do this!
We want to find: $S = \left[ \begin{array}{ccc}
a & b & c \end{array} \right]$ given $A = \left[ \begin{array}{cccc}
0 & 0 & 0 & 1\\
1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 \end{array} \right]$
such that:
$$ S \cdot A \equiv \left[ \begin{array}{cccc}
0 & 0 & 0 & 0 \end{array} \right] \pmod 2$$
so, multiplying $S$ and $A$ yields the $1\times4$ row vector:
$$\tag 1 S \cdot A = \left[ \begin{array}{cccc}
b + c & b + c & b + c & a + c
\end{array} \right] \equiv \left[ \begin{array}{cccc}
0 & 0 & 0 & 0 \end{array} \right] \pmod 2$$
From $(1)$, we get a set of modular equations (given that we have three duplicates):
$$ b + c \equiv 0 \pmod 2$$
$$ \tag 2 a + c \equiv 0 \pmod 2$$
So, solving the system in $(2)$ by whatever method you are comfortable with, we get
$$S = \left[ \begin{array}{ccc}
a & b & c \end{array} \right] = \left[ \begin{array}{ccc}
1 & 1 & 1 \end{array} \right]$$
This is, apparently, telling us that the product of all 3 equations yields a square$\pmod N$, meaning we use the table they calculated above and take the product of all three factors for both sides from $X + 124$ and $Y$ (table they calculated earlier).
So, forming the product of all three (since we have S with all $1's$), the product of the $Y's \equiv$ the product of the three squared $X + 124$'s, yielding $29 \cdot 782 \cdot 22678 = 22678^2$ and $124^2 \cdot 127^2 \cdot 195^2 = 3070860^2$, and then equating sides, we arrive at:
$$22678^2 \equiv 3070860^2 \pmod {15347}$$
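As a quick numerical sanity check of that last step (a small Python snippet, not part of the original derivation):
>>> 29 * 782 * 22678 == 22678**2                    # product of the three Y values
True
>>> 124**2 * 127**2 * 195**2 == 3070860**2          # product of the three squared X+124 values
True
>>> pow(22678, 2, 15347) == pow(3070860, 2, 15347)  # the two sides really are congruent mod N
True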
Update:
I am not sure it is a good idea to rely on an example on the Wiki for such a complex algorithm. I just got home and looked in "Prime Numbers, A Computational Perspective", by Crandall and Pomerance. In it, they have spelled out the algorithm and all of the nuances!
You can also find additional examples in these papers on NFS:
NFS 1
NFS 2
Regards
| {
"pile_set_name": "StackExchange"
} |
Q:
Hora Keep tool change column name
I'm using Hora Keep Tool for SQL queries. When I use operations for example like
avg(age) to build the average of a few values for the "group by" functionality, then in the query output the original column name changes from "age" to "avg(age)".
I know that a string after the operation for example:
avg(age) age
would rename the column, but I have a lot of these which I would then have to change like this.
Is there a way to prevent the column name from changing from age to avg(age) during the query?
I would be happy if someone could give me some helpful tips.
Thanks in advance.
A:
You can use the AS keyword in SQL to alias a column, e.g.
select user_name as username from users;
This query will return the column under the name username.
---------------
USERNAME
---------------
hello
hello1
hello3
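So for the aggregate in the question, something along these lines should keep the original name (table and grouping column are assumed here):
select avg(age) as age
from persons
group by department;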
| {
"pile_set_name": "StackExchange"
} |
Q:
Call url from javascript where the url is not in the same level as index.php
I'm trying to call a php file through ajax. No problem so far. But my file is in a folder called application/logic. The problem is that my domain is redirected to public/ where my index.php file is. My js files are also in that root, under public/js. Now how can I reach the php files under application/logic through javascript?
function add_DropDown(formid){
var arr = demountform(formid);
var myjsonarr = JSON.stringify(arr);
$.ajax({
url:'../application/logic/admin-functions.php',
type: 'post',
success: function(output) {
alert(output);
}
});
}
A:
You can not access files with an ajax request that are outside of the web root. If you can not access a file from the browser address bar then you will not be able to access it from javascript either. A solution could be creating a php file in /public that will include or require /application/logic/admin-functions.php.
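A minimal sketch of such a bridge file, assuming the directory layout from the question (save it as public/admin-functions.php and point the ajax url at 'admin-functions.php' instead of the ../application path):
<?php
// public/admin-functions.php - thin wrapper so the real logic can stay outside the web root
require __DIR__ . '/../application/logic/admin-functions.php';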
| {
"pile_set_name": "StackExchange"
} |
Q:
Connect to SSH server through SOCKS5 proxy
how to connect via ssh using shared key with socks in python?
I have tested several ways, any suggestions?
Thank you very much in advance ...
Imports:
import paramiko
from paramiko import RSAKey
import socket
import socks
import base64
Script:
def createSSHClient(server, port, user):
sock=socks.socksocket()
sock.set_proxy(
proxy_type=socks.SOCKS5,
addr='10.0.0.2',
port=1080,
username=base64.b64decode('dXNlcg==').decode("utf-8"),
password=base64.b64decode('cGFzc3dvcmQ=').decode("utf-8")
)
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
privkey = paramiko.RSAKey.from_private_key_file('/home/my-user/.ssh/id_rsa')
ssh.connect(server, port, user, privkey, sock)
return ssh
outputsocks = []
ssh = createSSHClient('192.168.1.23', 22, 'my-user')
outputsocks.append((ssh.exec_command('ls -ltr')[1]).read().decode('ascii'))
ssh.close()
print(outputsocks)
Output:
Traceback (most recent call last):
File "/home/my-user/teste2.py", line 26, in <module>
ssh = createSSHClient('192.168.1.23', 22, 'my-user')
File "/home/my-user/teste2.py", line 22, in createSSHClient
ssh.connect(server, port, user, privkey, sock)
File "/usr/local/lib/python3.6/site-packages/paramiko/client.py", line 349, in connect
retry_on_signal(lambda: sock.connect(addr))
File "/usr/local/lib/python3.6/site-packages/paramiko/util.py", line 283, in retry_on_signal
return function()
File "/usr/local/lib/python3.6/site-packages/paramiko/client.py", line 349, in <lambda>
retry_on_signal(lambda: sock.connect(addr))
TimeoutError: [Errno 110] Connection timed out
Note: The same script works if you remove the connection via socks and use a host that does not need a VPN, the socks credentials are correct and the shared key works.
A:
The error was in not opening the connection via sock before starting SSH, this was resolved with sock.connect()
More info: socket connect() vs bind()
Bug fix:
def createSSHClient(server, port, user):
sock=socks.socksocket()
sock.set_proxy(
proxy_type=socks.SOCKS5,
addr='10.0.0.2',
port=1080,
username=base64.b64decode('dXNlcg==').decode("utf-8"),
password=base64.b64decode('cGFzc3dvcmQ=').decode("utf-8")
)
sock.connect(('192.168.1.20',22))
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
privkey = paramiko.RSAKey.from_private_key_file('/home/my-user/.ssh/id_rsa')
ssh.connect(server, port, user, privkey, sock)
return ssh
outputsocks = []
ssh = createSSHClient('192.168.1.23', 22, 'my-user')
outputsocks.append((ssh.exec_command('ls -ltr')[1]).read().decode('ascii'))
ssh.close()
sock.close()
print(outputsocks)
| {
"pile_set_name": "StackExchange"
} |
Q:
Why can't I move std::ofstream?
Looking at previous answers on SO, it seems that while std::ostream is not be movable, std::ofstream should be. However, this code
#include <fstream>
int main()
{
std::ofstream ofs;
std::ofstream ofs2{std::move(ofs)};
}
does not seem to compile in any version of gcc or clang I tried (with either --std=c++11 or --std=c++14). The compiler error varies somewhat, but here's what I get for gcc 4.9.0
6 : error: use of deleted function 'std::basic_ofstream::basic_ofstream(const std::basic_ofstream&)'
Is this the expected behavior, according to the standard?
Note that a very similar question was asked before ( Is std::ofstream movable? ) but seems the standard has changed since then ( as detailed in
Why can't std::ostream be moved? ) rendering those answers outdated. Certainly, none of those answers explains why the code above does not compile.
Came across this issue when trying to use containers of ofstream, which does not work because of the above.
A:
According to the standard
27.9.1.11 basic_ofstream constructors
or its more "readable" version http://en.cppreference.com/w/cpp/io/basic_ofstream/basic_ofstream , std::basic_ofstream<> has a move constructor, so the code should compile.
clang++ 3.5 compiles it with -std=c++11 or -std=c++1y. Also gcc5 compiles it, so probably it is not implemented in libstdc++ for gcc < 5
Interestingly, the lack of move semantics is not mentioned on gcc's stdlibc++ implementation https://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html#status.iso.2014
See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=54316 for a bug report, thanks to @BoBTFish for pointing out. It is confirmed that the issue was fixed in gcc5.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to install Puppet Enterprise in Redhat?
I am installing Puppet Enterprise on the master (Red Hat) and I have followed the steps below. Let me know how to solve this.
While executing the below command
yum -y install puppet-enterprise-installer
Getting Error like below
No match for argument: puppet-enterprise-installer
Error: Unable to find a match: puppet-enterprise-installer
A:
The problem is that puppet-enterprise-installer isn't installed using yum.
You need to follow the instructions on https://puppet.com/docs/pe/latest/install_pe_getting_started.html, download the installation tarball, untar it and run puppet-enterprise-installer from there.
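Roughly, that boils down to something like the following (the exact tarball name depends on the PE version and platform you downloaded):
tar -xf puppet-enterprise-<version>-el-7-x86_64.tar.gz
cd puppet-enterprise-<version>-el-7-x86_64
sudo ./puppet-enterprise-installer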
| {
"pile_set_name": "StackExchange"
} |
Q:
Sphinx exclude one Page from html_sidebars
I am using Sphinx to build user docs for an application. According to the documentation for the build configuration file there is an html_sidebars setting with example documentation.
html_sidebars = {
'**': ['globaltoc.html', 'sourcelink.html', 'searchbox.html'],
'using/windows': ['windowssidebar.html', 'searchbox.html'],
}
I am looking to have all pages display the sidebar except one, I have only been able to achieve the inverse of that, where the sidebar appears on just one page and not the rest.
html_sidebars = {'index': ['localtoc.html']}
I know that its possible to use glob syntax and I've tried almost every variation of [!index] I can think of without success.
A:
You will need to have your expression match everything except the string 'index'. Unfortunately globbing is not that flexible. There are several ways of working around it.
explicitly list each file except index
rename the files that you want to have the sidebar with a prefix or suffix, and use a corresponding globbing pattern to match only those files
move the files into a subdirectory, and use a corresponding globbing pattern
try [!x], assuming no other file has "x" in its name (i, n, d, and e are too common)
possibly something else?
| {
"pile_set_name": "StackExchange"
} |
Q:
how to get the xml in php response
I am trying to get a response from a server with the following code:
<?php
$url = "http://pgtest.redserfinsa.com:2027/WebPubTransactor/TransactorWS?WSDL";
$post_string = '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:web="http://webservices.serfinsa.sysdots.com/">
<soapenv:Header/>
<soapenv:Body>
<web:cardtransaction>
<!--Optional:-->
<security>{"comid":"comid","key":"$!@!@!@!@!@","comwrkstation":"comwrkstation"}</security>
<!--Optional:-->
<txn>MAN</txn>
<!--Optional:-->
<message>{"CLIENT":"9999994570392223"}
</message>
</web:cardtransaction>
</soapenv:Body>
</soapenv:Envelope>';
$post_data = array('xml' => $post_string);
$stream_options = array(
'http' => array(
'method' => 'POST',
'header' => 'Content-type: text/xml' . "\r\n",
'content' => http_build_query($post_data)));
$context = stream_context_create($stream_options);
$response = file_get_contents($url, null, $context);
?>
However, I'm still getting empty responses, does anyone have any idea?
A:
I don't know why your code returns empty but you can try using curl
CODE:
$url = "http://pgtest.redserfinsa.com:2027/WebPubTransactor/TransactorWS?WSDL";
$xml = '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:web="http://webservices.serfinsa.sysdots.com/">
<soapenv:Header/>
<soapenv:Body>
<web:cardtransaction>
<security>{"comid":"comid","key":"$!@!@!@!@!@","comwrkstation":"comwrkstation"}</security>
<txn>MAN</txn>
<message>{"CLIENT":"9999994570392223"}
</message>
</web:cardtransaction>
</soapenv:Body>
</soapenv:Envelope>';
$headers = array(
"Content-type: text/xml",
"Content-length: " . strlen($xml),
"Connection: close",
);
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_TIMEOUT, 30);
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_POSTFIELDS, $xml);
curl_setopt($curl, CURLOPT_HTTPHEADER, $headers);
$response = curl_exec($curl);
$error = curl_error($curl);
print_r($response);
print_r($error);
Result:
Recv failure: Connection reset by peer
| {
"pile_set_name": "StackExchange"
} |
Q:
Unable to access commandline arguments in Python 3.6; python launcher the issue?
I recently finally moved my default python to 3.6 from 2.7.
Oddly enough, I can't get sys.argv to work correctly.
Short program testargs.py:
#!python3
import sys
print("This is the name of the script: ", sys.argv[0])
print("Number of arguments: ", len(sys.argv))
print("The arguments are: " , str(sys.argv))
When I run it with arguments ("abcdefg") those arguments don't show up in sys.argv:
C:\test>testargs.py abcdefg
This is the name of the script: C:\test\testargs.py
Number of arguments: 1
The arguments are: ['C:\\test\\testargs.py']
I'm totally confused by this. Any ideas?
I'm using the Python launcher; is that an issue?
When I specifically invoke one or the other Python interpreter, it seems okay:
C:\test>c:\Python27\python.exe testargs.py abcdefg
('This is the name of the script: ', 'testargs.py')
('Number of arguments: ', 2)
('The arguments are: ', "['testargs.py', 'abcdefg']")
C:\test>c:\Python36\python.exe testargs.py abcdefg
This is the name of the script: testargs.py
Number of arguments: 2
The arguments are: ['testargs.py', 'abcdefg']
A:
This seems to be a long-standing difficult-to-reproduce bug in Python: https://bugs.python.org/issue7936 . The bug report includes a registry edit that reportedly addresses it but that I don't feel comfortable messing with; so I'll just live with it until I move completely off of Python 2.
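For reference, the usual culprit reported there is a .py file association whose command template has lost the trailing %*, which is what forwards the extra arguments to the script. You can inspect it from a normal command prompt; on a machine where arguments work it looks roughly like this (the exact interpreter path and quoting vary, the important part is the %* at the end):
C:\>assoc .py
.py=Python.File

C:\>ftype Python.File
Python.File="C:\Windows\py.exe" "%L" %*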
| {
"pile_set_name": "StackExchange"
} |
Q:
How to access and modify alternate data streams (ADS) with go
How can I access and modify metadata stored in NTFS alternate data streams with golang?
I already know about https://github.com/pkg/xattr for accessing metadata stored in extended attribures under linux. However, i need comparable functionality for NTFS.
A:
Alternate data streams can be accessed using ":StreamName" syntax when opening a file in NTFS.
ads, err := os.Open(`C:\Path\To\My\File:MyHiddenStream`) // raw string literal, so the backslashes are not treated as escape sequences
if err != nil {
panic(err)
}
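Writing to a stream works the same way; a sketch using os.OpenFile (error handling trimmed, path and stream name are just examples):
ads, err := os.OpenFile(`C:\Path\To\My\File:MyHiddenStream`, os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
    panic(err)
}
defer ads.Close()
ads.WriteString("metadata goes here")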
| {
"pile_set_name": "StackExchange"
} |
Q:
Algorithm of calculate modulo after power
Below is a piece of code to calculate the value of a^b % c:
int powermod(int a,int b,int c)
{
int ans = 1;
while(b)
{
if(b&1)
ans=(ans*a)%c;
a=(a*a)%c;
b=b>>1;
}
return ans;
}
I tried to understand the algorithm behind the code but couldn't figure it out.
Can someone explain it to me? How does it work, and does the algorithm behind it have a name?
A:
It uses the binary representation of the exponent; the algorithm is usually called binary exponentiation (or exponentiation by squaring).
a^b = a^(b1) * a^(b2) * ... * a^(bn) whenever b1 + ... + bn = b.
Writing b in binary, you get a^b = a^(b0*2^0) * a^(b1*2^1) * ... * a^(bn*2^n), where bi is the i-th bit of b and can only be 0 or 1.
We don't need to calculate a^(2^i) from scratch every time, because a^(2^i) = (a^(2^(i-1)))^2; that is what the a = (a*a) % c line does.
Because (x*y) % c = ((x % c) * (y % c)) % c, the algorithm can apply the modulus at every multiplication without changing the result.
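For example, here is the same routine transliterated to Python (not part of the original answer) so you can compare it against the built-in three-argument pow:
def powermod(a, b, c):
    ans = 1
    while b:
        if b & 1:             # lowest remaining bit of b is set
            ans = (ans * a) % c
        a = (a * a) % c       # a now holds a^(2^i) mod c
        b >>= 1
    return ans

print(powermod(3, 13, 7))     # 3, since 3^13 = 1594323 = 227760*7 + 3
print(pow(3, 13, 7))          # Python's built-in modular pow agrees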
| {
"pile_set_name": "StackExchange"
} |
Q:
Javascript Validation Function
I have a school assignment where I'm designing a web page to include form entry. One of the parameters of the assignment is to validate a field where a user enters a name.(must be non-blank, must begin with a capital letter, and must contain at least one vowel and one consonant). As I'm learning the HTML part and not the JavaScript part, I'm allowed to use anything I find for the JavaScript so long as I cite the source. Can anyone help with this? I'm vaguely familiar with it but not nearly enough to "get it" on my own. To be clear, I am not seeking assistance with html, only the JavaScript function that meets the name parameter:
(must be non-blank, must begin with a capital letter, and must contain at least one vowel and one consonant)
Thank you for your help!
A:
Try this, will return true if validation succeeds:
function validateName(name) {
var cap = /^[A-Z]{1}.+$/;
var con = /[bcdfghjklmnpqrstvwxyz]/i;
var vow = /[aeiou]/i;
if(cap.test(name) && con.test(name) && vow.test(name)) {
return true;
} else {
return false;
}
}
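For example (hypothetical inputs):
validateName("Alice");  // true
validateName("alice");  // false - does not start with a capital letter
validateName("Bcd");    // false - no vowel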
| {
"pile_set_name": "StackExchange"
} |
Q:
Appium, Java & Testng - find the reason for a NullPointerException
I am getting a NullPointerException before Appium and the emulator launch. So trying to debug by having sysout lines in the code is not helping at all.
If anyone has some advice please send through as I am going crazy!
My dependencies:
<dependencies>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.5.9</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-io</artifactId>
<version>1.3.2</version>
</dependency>
<dependency>
<groupId>io.appium</groupId>
<artifactId>java-client</artifactId>
<version>7.0.0</version>
</dependency>
<dependency>
<groupId>org.testng</groupId>
<artifactId>testng</artifactId>
<version>7.0.0-beta3</version>
</dependency>
<dependency>
<groupId>com.googlecode.json-simple</groupId>
<artifactId>json-simple</artifactId>
<version>1.1.1</version>
</dependency>
<dependency>
<groupId>io.rest-assured</groupId>
<artifactId>rest-assured</artifactId>
<version>4.0.0</version>
</dependency>
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-testng</artifactId>
<version>4.0.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-java</artifactId>
<version>4.0.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-core</artifactId>
<version>4.0.0</version>
<scope>test</scope>
</dependency>
</dependencies>
My Hooks class:
package MA.test;
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.MobileElement;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.remote.AndroidMobileCapabilityType;
import io.appium.java_client.remote.MobileCapabilityType;
import io.appium.java_client.service.local.AppiumDriverLocalService;
import io.appium.java_client.service.local.AppiumServiceBuilder;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Parameters;
import java.net.MalformedURLException;
import java.net.URL;
public class Hooks {
public AppiumDriver driver;
public AppiumDriverLocalService service;
@Parameters({"platformVersion", "emulatorNumber", "deviceName", "port"})
@BeforeTest(alwaysRun = true)
public void startAppiumServer(String platformVersion, String emulatorNumber, String deviceName, String port) throws InterruptedException, MalformedURLException {
System.out.println("\n ABC");
service = new AppiumServiceBuilder()
.usingPort(Integer.valueOf(port))
.build();
service.start();
DesiredCapabilities caps = new DesiredCapabilities();
caps.setCapability(MobileCapabilityType.PLATFORM_VERSION, platformVersion);
caps.setCapability(MobileCapabilityType.DEVICE_NAME, emulatorNumber);
caps.setCapability(AndroidMobileCapabilityType.AVD, deviceName);
caps.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");
caps.setCapability(MobileCapabilityType.APPLICATION_NAME, "Name");
caps.setCapability(MobileCapabilityType.APPIUM_VERSION, "1.14.0");
caps.setCapability(AndroidMobileCapabilityType.APP_ACTIVITY, "activity");
caps.setCapability(AndroidMobileCapabilityType.APP_PACKAGE, "package");
caps.setCapability(MobileCapabilityType.AUTOMATION_NAME, "UiAutomator2");
driver = new AndroidDriver<MobileElement>(new URL("http://0.0.0.0:" + port + "/wd/hub"), caps);
System.out.println("\n Appium server: " + service.getUrl());
Thread.sleep(2000);
}
@AfterTest
public void teardown() {
service.stop();
driver.quit();
driver.closeApp();
System.out.println("\n Test quit");
}
}
My TestRunner class:
package MA.steps;
import cucumber.api.CucumberOptions;
import cucumber.api.testng.AbstractTestNGCucumberTests;
@CucumberOptions(
plugin = {"pretty", "html:target/cucumber-reports"}
, monochrome = true
, features = "src/test/java/feature"
, tags = "@Login"
)
public class TestRunner extends AbstractTestNGCucumberTests {
}
My testng.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Android Parallel Execution" parallel="tests" thread-count="2" verbose="7">
<test name="Device1">
<parameter name="platformVersion" value="9.0"/>
<parameter name="emulatorNumber" value="emulator-5554"/>
<parameter name="deviceName" value="Android9_Nexus"/>
<parameter name="port" value="4723"/>
<classes>
<class name="MA.steps.TestRunner"/>
</classes>
</test>
</suite>
And I am receiving the following error:
java.lang.NullPointerException
at screens.myAccountOverviewScreen.logoutAccount(myAccountOverviewScreen.java:104)
Which points to the following line of code:
driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
This ^ is the first time driver is called in my tests, but what baffles me is that it is throwing a nullpointerexception without even launching appium, or starting the emulator.
A:
The problem lies in your test code.
You created a suite xml file to include only your TestRunner class. But the entire Appium instantiation logic sits in the Hooks class, which does it via the TestNG configuration annotations @BeforeTest and @AfterTest. This class is neither included in your suite, nor does your test class (TestRunner) extend it. So your configurations aren't getting invoked, leading to a null value for your AppiumDriver object.
To fix the problem, you can do one of the following:
Edit your suite xml file and include Hooks class also in it. (or)
Build a TestNG listener which implements IInvokedMethodListener and move your driver instantiation into its beforeInvocation
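For the first option, that just means listing Hooks alongside the runner inside the <classes> block of your testng.xml, e.g.:
<classes>
    <class name="MA.test.Hooks"/>
    <class name="MA.steps.TestRunner"/>
</classes>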
| {
"pile_set_name": "StackExchange"
} |
Q:
Xcode8 ld- framework not found
After migrating to Swift 3 I am having a problem with the HockeySDK pod. A few hours ago everything worked well but now I am unable to build it. I am getting a Mach-O linker error saying the framework wasn't found. I am having the same issue when I work with previous commits, where everything was ok. I have tried deleting derived data and restarting the system but without any results. Thanks for any answer!
A:
These steps may help:
Close Xcode
Delete DerivedData
Open Xcode, wait for indexing
Build the project
| {
"pile_set_name": "StackExchange"
} |
Q:
Best Practice to write ::usage for own package functions
I'm writing an own package with Mathematica, that I would like to make available for others. In order to do that (and for my own usage of the package), I would like to write ::usage-strings for all my public functions.
I would like to use a formated string for ::usage in order to refer to some mathematical background in the explanations, e.g. if one of the inputs is a matrix, the ::usage may contain $\mathbf{M}\in\mathbb Z^{d\times d}$ or something like that.
The way I'm generating these up to now is, to take a cell and change it's style to text, then write the text and copy it into the ::usage="..." cell of the package. On saving the package, that gets transformed into the classic commands, like in the following example:
DirichletKernel::usage = "DirichletKernel[\!\(\*StyleBox[\"mM\",\nFontWeight->\"Bold\"]\)]
provides a dirichlet kernel with respect to the regular integral matrix mM
in Fourier coefficients. The same options as for the function \!\(\*
StyleBox[\"deLaValleePoussinKernel\", \"Code\"]\)\!\(\*
StyleBox[\"[\", \"Code\"]\)\!\(\*StyleBox[\"]\", \"Code\"]\) apply.";
Where the last part at least seems a little messed up.
There are two questions arising:
1) Is there an easier way to write such a usage message? because if I want to change something, I usually have to write the whole text again, because the format is quite verbose. What workflow do you use to write ::usage messages?
2) Sometimes the copy&paste stuff generates errors (hmpf, if you want to demonstrate something, of course, it works, I'll edit this, if I can get the actual message), that occur in the message window. Since Mathematica 9 that already happens if the autocomplete-box occurs.
So what has to be taken into account copying formated cells (text with math formula and e.g. bold face and code parts) into strings - in order to not get messed up (erroneous) strings?
A:
First, I want to say that I don't like heavily formatted usages messages. A usage message should be a short description in a form of a simple ascii message, so that it can be viewed even without a front-end.
Nevertheless, let me try to give you a hint here. I would do the following:
write your usage messages in a separate package-notebook in the Mathematica front-end, where you can look at it as formatted text and not as string-expression.
store this notebook as package Usage.m side by side to your implementation package.
load this package in the Kernel/init.m of you package.
1. Package notebook
When you use the Mathematica front-end for editing, you can input any special box form without caring about the underlying, complicated string-expression.
2. Store the package
After you saved the package, it is stored on disk as
(* ::Package:: *)
BeginPackage["YourContext`"];
f::usage="f[x] calculates \!\(\*SubsuperscriptBox[\(\[Integral]\), \(min\),\
\(max\)]\)f[x]\[DifferentialD]x."
EndPackage[];
but you don't have to care about it, because 1. you never see this in e.g. the Wolfram Workbench because your implementation is in another file and 2. you edit this package always in the front-end.
3. Loading the usages
Just load this in the init.m. It's probably the best, when you look how it is done in the VectorFieldPlots package in the AddOns/Packages path. Their init.m looks like
(* initialization file for the vector field plots package VectorFieldPlots` *)
Get["VectorFieldPlots`Usage`"];
Get["VectorFieldPlots`VectorFieldPlots`"];
Get["VectorFieldPlots`VectorFieldPlots3D`"];
and the file structure is
A:
In Mathematica, usage messages are typically for conveying a short description of how to call the function. Your text that talks about the options would be better placed in package documentation in the Details section. You can use Wolfram Workbench to generate package documentation that shows up in the Documentation Center.
A:
My advice is to keep things like this as simple as possible. I have found that copying the official documentation is adequate and generates few (... no) error messages.
With v8, there appears to be a newer WRI style. Before this, usages were written
function::usage = "function[arguments, options] does ...";
An example of this form can be found in
ToFileName[{$AddOnsDirectory, AddOns, ExtraPackages, Utilities}, CleanSlate.m]
From v8 onwards, the style seems to have changed to
If[!ValueQ[function::usage],
function::usage = "function[arguments, options] does ..."
];
An example of this form can be found in
ToFileName[{$AddOnsDirectory, AddOns, Packages, ANOVA}, ANOVA.m]
I have found no practical difference between the two versions, but your kilometerage may vary.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it possible to set up continuous integration for MS dynamics crm 2011?
We are just beginning development and implementation for dynamics crm 2011 on premises. Is it possible to implement automation for code check-in to promote code from development to test systems? It looks like this would involve export/import of unmanaged solutions containing the development code that was checked in. I have not been able to find APIs around this functionality.
If that is not possible, how close can you get? It looks like there are APIs to automate the uploading of web resources and plug-ins (e.g. webresourceutility in the sdk), but the web resources still need to be manually linked to the form they are to be used on (in the case of javascript etc). Has anyone made progress in automating parts of their CRM environments?
For reference, we're using VS 2010 & TFS 2010 with MSBuild for our current continuous integration.
A:
We have a few techniques that provides us a very solid CI structure.
Plugins
All our Plugins are CI Compiled on Check-In
All plugin code we write has self-registration details as part of the component.
We have written a tool which plays the Plugins to the database, uninstalling the old ones first based on the self-registration
details.
Solution
We have an unmanaged solution in a Customisation organisation which
is clean and contains no data. Development is conducted out of this
organisation. It has entities, forms, Jscript, Views, Icons, Roles,
etc.
This Customisation database has all the solutions we've imported from 3rd parties, and customisations are made into our solution which is the final import into a destination organisation.
The Solution is exported as managed and unmanaged and saved into
TFS
We store the JScript and SSRS RDLs in TFS and have a custom tool
which plays these into the customisation database before it is
exported.
We also have a SiteMap unmanaged Solution which is exported as unmanaged (to ensure we get a final resultant Sitemap we are after)
Deployment
We have a UI and Command Line driven tool which does the following :-
Targets a particular Organisation
Imports the Customisation managed solution into a selected environment. e.g. TEST. Additionally imports the unmanaged Sitemap.
Uninstalls the existing solution which was there (we update the solution.xml file giving it a name based on date/time when we import)
Installs/Uninstalls the Plugin Code
Installs any custom SQL scripts (for RDLs)
Re-enables Duplicate Detection Rules
Plays in certain meta-data we store under source control. e.g. Custom Report entity we built which has attachments and XML configuration.
It isn't entirely perfect, but via command line we refresh TEST and all the Developer PCs nightly. It takes about 1 hour to install and then uninstall the old solution per organisation.
A:
We use CI extensively for Dynamics CRM. For managing solutions, I would recommend using a "clean" Dynamics CRM implementation which will be the master for your solutions and also for your "domain data". See http://msdn.microsoft.com/en-us/library/microsoft.crm.sdk.messages.importsolutionrequest.aspx for importing solutions. Also check out - http://msdn.microsoft.com/en-us/library/hh547388.aspx
| {
"pile_set_name": "StackExchange"
} |
Q:
how variables are set when using multiple OR (||) operators in Jquery/Javascript?
I'm having trouble understanding the order in which an || is executed in Jquery/Javascript.
If I have this:
someVar = $el.attr("one") || options.two || "three";
it sets someVar to "three" when both $el.attr("one") and options.two are not defined.
I need to add another condition to this statement, like so:
someVar = $el.attr("one") || options.two || options.extra == "true" ? undefined : "three";
So this should say:
If neither '$el.attr("one")' or 'options.two' are defined, check if 'options.extra == true', if it's true, set to 'undefined', otherwise set to 'three'.
However, I'm always getting undefined even if I set $el.attr("one") and I don't understand why?
Can anyone tell me what is wrong in my logic?
Thanks!
A:
I think you must put some parenthesis:
someVar = $el.attr("one") || options.two || (options.extra == "true" ? undefined : "three");
in fact your own code is read as:
someVar = ($el.attr("one") || options.two || options.extra == "true") ? undefined : "three";
| {
"pile_set_name": "StackExchange"
} |
Q:
How to represent TCP port range in Java (16 bit)
A port number ranges from 0 to 65535 as it is stored as a 16-bit unsigned integer.
In Java, a short which is 16-bit only goes up to 32,767. An integer would be fine, but the API expects an unsigned integer, so it must fit within 16 bits.
My first attempt was as follows:
public byte[] encode() {
final int MESSAGE_SIZE = 10;
ByteBuffer buffer = ByteBuffer.allocate(MESSAGE_SIZE);
buffer.putInt(someInt);
buffer.putShort(portValue);
buffer.putInt(someOtherInt);
return buffer.array();
}
But clearly, I cannot represent a port above 32,767 if I simply use a short.
My question is, how can I put a value that can be up to 65535 as a short into the buffer so that the receiving API can interpret it within 16 bits?
Thank you!
A:
You just use a normal int in your program to store the port. When you want to send the int on the wire as a 16-bit value you simply cast it to a short. This just discards the high-order 16-bits (which you weren't using anyway) and the low-order 16-bits are are left unchanged. Example:
public byte[] encode() {
final int MESSAGE_SIZE = 10;
int portValue = 54321;
ByteBuffer buffer = ByteBuffer.allocate(MESSAGE_SIZE);
buffer.putInt(someInt);
buffer.putShort((short) portValue);
buffer.putInt(someOtherInt);
return buffer.array();
}
From Narrowing Primitive Conversions:
A narrowing conversion of a signed integer to an integral type T
simply discards all but the n lowest order bits, where n is the number
of bits used to represent type T.
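On the receiving side you do the reverse: read the 16 bits back as a short and mask it to recover the unsigned value (bytes here is assumed to be the received array):
ByteBuffer buffer = ByteBuffer.wrap(bytes);
int someInt = buffer.getInt();
int port = buffer.getShort() & 0xFFFF;   // back in the range 0..65535
int someOtherInt = buffer.getInt();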
| {
"pile_set_name": "StackExchange"
} |
Q:
Delete row not working in joomla
I'm creating a component in joomla and I'm having some problems using the database, specifically to delete rows. The code below is what I'm using
$db = JFactory::getDbo();
$query = $db->getQuery(true);
// delete if this date already exists for this user
$conditions = array(
$db->quoteName('userid') . '='.$array['userid'],
$db->quoteName('date') . '='.$date
);
$query->delete($db->quoteName('#__timereport_schedule'));
$query->where($conditions);
$db->setQuery($query);
$result = $db->execute();
So what I'm trying to do here is delete the rows that match the given userid and date, fairly simple. However it ends up not affecting the database. I know the variables $array['userid'] and $date are correct because the same are used later in the same function to do a insert (it's supposed to delete the record if it exists and make a new one) and the insert works fine which means I end up with duplicate entries.
example row that was successfully inserted:
(userid, date, starttime, endtime, id, enddate, leave, days)
VALUES
(456, '2013-01-01', '08:00:00', '16:00:00', 448, '2013-01-01', '3', '["Tue"]')
with:
$query = $db->getQuery(true);
$columns = array('userid', 'date', 'starttime', 'endtime', 'id', 'leave');
$values = array("'".$array['userid']."'", "'".$date."'", "'".$array['starttime']."'", "'".$array['endtime']."'", "'null'", "'".$array['leave']."'");
$query
->insert($db->quoteName('#__timereport_schedule'))
->columns($db->quoteName($columns))
->values(implode(',', $values));
$db->setQuery($query);
try {
$result = $db->execute();
} catch (Exception $e) {
return $e;
}
What am I missing? I followed the http://docs.joomla.org/Inserting,_Updating_and_Removing_data_using_JDatabase#Deleting_a_Record example to create this query.
A:
At the end, add echo $query;. This should print to the screen the SQL query that is actually being run. Post that so we can see the query, since this will help see what could be wrong.
Also, in many cases, Joomla redirects after saves to prevent a page refresh from resubmitting data. So it can be helpful to add an exit(); statement after the echo to actually see what it is echoing.
(Though as I type this, I'm guessing that the date needs to be quoted.)
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I correctly access elements in nested dictionaries?
I need the output to read "Родителем этого человека является 'Туреццкоподданый', а его дедом 'Отец туреццкоподданого'" (i.e. "The parent of this person is 'Туреццкоподданый', and his grandfather is 'Отец туреццкоподданого'").
family = {"Остап Бендер": {"Турецкоподанный": "Отец турецкоподанного"},
"Люк Скайуокер": "Дарт Вейдер",
"Солид Снейк": "Биг Босс"}
# Look up a person's father and his grandfather by name.
son = input("Введите имя человека: ")
if son in family:
print("\nРодителем человека по имени", son, "является", family[???], "а его дедом", family[???])
else:
print("Ошибка, такого человека нет в базе данных")
A:
>>> dad, grandpa = list(family[son].items())[0]
>>> "Родителем человека по имени {son!r} является {dad!r}, а его дедом {grandpa!r}".format(**vars())
"Родителем человека по имени 'Остап Бендер' является 'Турецкоподанный', а его дедом 'Отец турецкоподанного'"
To arrive at this result, you can open the interactive Python console (REPL) and experiment with the dictionary:
>>> son = "Остап Бендер"
>>> family[son]
{'Турецкоподанный': 'Отец турецкоподанного'}
>>> family[son].items()
dict_items([('Турецкоподанный', 'Отец турецкоподанного')])
>>> list(family[son].items())
[('Турецкоподанный', 'Отец турецкоподанного')]
>>> list(family[son].items())[0]
('Турецкоподанный', 'Отец турецкоподанного')
| {
"pile_set_name": "StackExchange"
} |
Q:
Where do I put my cython files in a python distribution?
I write and maintain a Python library for quantum chemistry calculations called PyQuante. I have a fairly standard Python distribution with a setup.py file in the main directory, a subdirectory called "PyQuante" that holds all of the Python modules, and one called "Src" that contains source code for C extension modules.
I've been lucky enough to have some users donate code that uses Cython, which I hadn't used before, since I started PyQuante before either it or Pyrex existed. On my suggestion, they put the code into the Src subdirectory, since that's where all the C code went.
However, looking at the code that generates the extensions, I wonder whether I should have simply put the code in subdirectories of the Python branch instead. And thus my question is:
what are the best practices for the directory structure of python distributions with both Python and Cython source files?
Do you put the .pyx files in the same directory as the .py files?
Do you put them in in a subdirectory of the one that holds the .py files?
Do you put them in a child of the .py directory's parent?
Does the fact that I'm even asking this question betray my ignorance at distributing .pyx files? I'm sure there are many ways to make this work, and am mostly concerned with what has worked best for people.
Thanks for any help you can offer.
A:
Putting the .pyx files in the same directory as .py files makes the most sense to me. It's what the authors of scikit-learn have done and what I've done in my py-earth module. I guess I think of Cython modules as optimized replacements for Python modules. I will often begin by writing a package in pure Python, then replace some modules with Cython if I need better performance. Since I'm treating Cython modules as replacements for Python modules, it makes sense to me to keep them in the same place. It also works well for test builds using the --inplace argument.
| {
"pile_set_name": "StackExchange"
} |
Q:
jquery: hover button for controlling a div's scrollbar?
I'm having a div with lots of content and a vertical scrollbar. I'd like to replace the scrollbar with an up/down button in order to scroll the content on hovering the buttons.
Any ideas how to do this? Thanks.
A:
Try scrollTop()
var intervalId = 0;

function scrollIt(amount){
    $('div').scrollTop($('div').scrollTop() + amount);
}

$('.down').mousedown(function() {
    // keep nudging the scroll position while the button is held down;
    // note a function is passed to setInterval rather than calling scrollIt immediately
    intervalId = setInterval(function() { scrollIt(5); }, 50);
}).bind('mouseup mouseleave', function() {
    clearInterval(intervalId);
});

$('.up').mousedown(function() {
    intervalId = setInterval(function() { scrollIt(-5); }, 50);
}).bind('mouseup mouseleave', function() {
    clearInterval(intervalId);
});
| {
"pile_set_name": "StackExchange"
} |
Q:
ASP.NET MVC adds Validation for non-required fields
I have a model class "Country" with property "CultureId" that is NOT marked as required. Another class "CountryViewModel" holds the same property "CultureId".
When rendering the "Create" view, I noticed that validation data attributes were added to the "CultureId" textbox, although there were no data annotations added.
I am using
@Html.HiddenFor(mode => mode.CultureId)
What might be the cause for such a behavior?
Thanks
A:
I'm guessing your CultureId is an int. MVC automatically adds required tags to non-nullable value types.
To turn this off add
DataAnnotationsModelValidatorProvider.AddImplicitRequiredAttributeForValueTypes = false;
to Application_Start or make your int nullable
A:
There are couple of ways to handle this -
a). Make the property as Nullable like
public int? Age { get; set; }
b). Use the below in controller class -
ModelState["Age"].Errors.Clear();
c). Add to the startup- DataAnnotationsModelValidatorProvider.AddImplicitRequiredAttributeForValueTypes = false;
| {
"pile_set_name": "StackExchange"
} |
Q:
Regular expression help - what's wrong?
I would like to ask for help with my regex. I need to extract the very last part from each URL. I marked it as 'to_extract' within the example below.
I want to know what's wrong with the following regex when used with sed:
sed 's/^[ht|f]tp.*\///' file.txt
Sample content of file.txt:
http://a/b/c/to_extract
ftp://a/b/c/to_extract
...
I am getting only correct results for the ftp links, not for the http.
Thanks in advance for your explanation on this.
i.
A:
Change [ht|f] to (ht|f), that would give better results.
[abc] means "one character which is a, b or c".
[ht|f] means "one character which is h, t, | or f", not at all what you want.
On some versions of sed, you'll have to call it with the -r option so that extended regex can be used :
sed -r 's/^(ht|f)tp.*\///' file.txt
If you just want to extract the last part of the url and don't want anything else, you probably want
sed -rn 's/^(ht|f)tp.*\///p' file.txt
| {
"pile_set_name": "StackExchange"
} |
Q:
Dropzone autoprocess queue false not working if there is an error like maxfilesize,maxfileuploads etc
When I upload an image larger than the permitted size in Dropzone, autoProcessQueue does not work and the queue is reported as complete; if I upload a proper file then it works correctly. Following is my code:
Dropzone.options.myAwesomeDropzone =
{
url: 'file-upload.php',
previewsContainer: ".dropzone-previews",
uploadMultiple: true,
parallelUploads: 100,
maxFiles: 100,
maxFilesize: 5,
addRemoveLinks: true,
autoProcessQueue: false,
acceptedFiles:'image/jpg,image/jpeg,image/png',
init: function()
{
thisDropzone = this;
this.on("queuecomplete", function (file) {
alert("all files uploaded successfully");
});
}
}
});
function process_queue()
{
if(thisDropzone.files.length > 0) {
thisDropzone.processQueue()
}
}
It triggers the alert "all files uploaded successfully" even if I upload a too-large file or a non-image file.
A:
this.on("queuecomplete", function (file) {
var size = thisDropzone.files[0].size/1000000;
if(thisDropzone.files[0].type== "image/jpeg" ||thisDropzone.files[0].type=="image/jpg" || thisDropzone.files[0].type=="image/png" && size<5)
listingSubmitted();
});
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I find the second order transfer function from this step response diagram?
I have the following diagram of a system's step response:
I'm having trouble understanding how to calculate the system's transfer function, given this diagram. Specifically, I don't understand how exactly I can calculate the natural frequency and damping ratio.
Nothing I've read on this has helped me get a clear picture of what I should do. Can anyone help me understand step-by-step how to think about this problem?
A:
From the step response plot, the peak overshoot, defined as
$$M_p = \frac{y_{peak}-y_{steady-state}}{y_{steady-state}}\approx\frac{1.25-0.92}{0.92}=0.3587$$
Also, the relationship between \$M_p\$ and damping ratio \$\zeta\$ (\$0\leq\zeta<1\$) is given by:
$$M_p=e^\frac{-\pi\zeta}{\sqrt{1-\zeta^2}}$$
Or, in terms of \$\zeta\$:
$$\zeta=\sqrt{\frac{\ln^2M_p}{\ln^2M_p+\pi^2}}$$
So, replacing that estimated \$M_p\$:
$$\zeta\approx0.31$$
Also, from the step response plot, the damped natural frequency is aprox. 0.5 Hz or \$\pi\$ rad/s. The relationship with the undamped natural frequency is:
$$\omega_n=\frac{\omega_d}{\sqrt{1-\zeta^2}}\approx3.3 rad/s$$
Finally, the gain \$G_{DC}=y_{steady-state}\approx0.92\$
A standard second order transfer function has the form:
$$H(s)=G_{DC}\frac{\omega_n^2}{s^2+2\zeta\omega_ns+\omega_n^2}$$
Putting the obtained values:
$$H(s)\approx\frac{10}{s^2+2s+11}$$
Compare the step response below with that supplied by you:
A:
Calculating the natural frequency and the damping ratio is actually pretty simple.
If you look at that diagram you see that the output oscillates around some constant value finally settling on it: the frequency of these oscillations is the damped frequency. To measure it from the diagram you should measure the distance between the points where the output crosses the settling value, that's half the period that is the inverse of the damped frequency. Looking at your graph I'd say that that distance is about one second, so the damped frequency should be approximately 0.5Hz, i.e. \$f_\mathrm{d}=0.5\$Hz. Let's just keep this in mind for now.
Now for the damping ratio. That's a bit trickier, the damping ratio measures how fast the oscillations decay, i.e. how fast the output settles. If the damping ratio is 0 they don't settle, if it's above unity you don't have oscillations but just a nice exponential. What you need to do is fit a curve on the output maxima, the curve being of the family \$Ae^{-\frac{t}{\tau}}+C\$. You need three points, you have three points, what you need is that tiny \$\tau\$. I don't really want to fit the curve so I'll make a wild guess and say that \$\tau=3s\$. Now let's recap:
$$
f_\mathrm{d}=0.5Hz\\
\tau=3s
$$
Since that's an under damped system the following holds:
$$
\omega_\mathrm{d} = \omega_0 \sqrt{1 - \zeta^2 }\\
\tau = \frac{1}{\omega_0\zeta}
$$
And you guessed it:
\$\omega_\mathrm{d}=2\pi f_\mathrm{d}\$
\$\omega_0\$ is the natural pulsation, so the natural frequency \$f_0=\frac{\omega_0}{2\pi}\$
\$\zeta\$ is the damping factor
Now just do the math and profit.
note the \$\tau=3\$s guess might be more right than you think.
| {
"pile_set_name": "StackExchange"
} |
Q:
CSS Rounded corners for table header with background-color
I'm trying to create a table with rounded top corners and a different background color for the header line. I succeeded in making both individually (super beginner in html/css) thanks to online ressources but I fail when it comes to have the two at the same time.
What I mean is (and you can see it in the fiddle below), I can round the corners just fine and have the design I want for my table except that the header background-color is still a perfect rectangle and thus is overflowing outside the rounded corners.
I tried adding the border-radius property in various places but none worked the way I intended. How can I make the corners rounded and having the thead background-color fitting nicely in it ?
table.std {
margin-top: 0.2cm;
width: 100%;
border: 0.03cm solid #8a8a8a;
border-spacing: 0;
border-radius: 15px 15px 0px 0px;
font-size: 10pt;
}
table.std thead {
text-align: left;
background-color: lightgray;
height: 25px;
}
table.std thead tr th:first-child {
padding-left: 0.25cm;
/* To align with section title */
border-bottom: 0.03cm solid #8a8a8a;
}
table.std tbody tr td:first-child {
padding-left: 0.25cm;
/* To align with section title */
width: 30%;
}
table.std tbody tr td {
border-bottom: 0.01cm dashed lightgray;
height: 20px;
}
<div>
<table class="std">
<thead>
<tr>
<th colspan=2>Test</th>
</tr>
</thead>
<tbody>
<tr>
<td>ID</td>
<td>id1</td>
</tr>
<tr>
<td>Date</td>
<td>2019/12/19</td>
</tr>
<tr>
<td>foo</td>
<td>bar</td>
</tr>
<tr>
<td>lorem</td>
<td>ipsum</td>
</tr>
<tr>
<td>john</td>
<td>doe</td>
</tr>
</tbody>
</table>
</div>
https://jsfiddle.net/co7xb42n/
Thanks for the help
A:
Add the border-radius to th
table.std thead tr th:first-child {
padding-left: 0.25cm; /* To align with section title */
border-bottom: 0.03cm solid #8a8a8a;
border-radius: 15px 15px 0px 0px;
}
https://jsfiddle.net/53eshg64/
| {
"pile_set_name": "StackExchange"
} |
Q:
Construct python dict from DeepDiff result
I have a DeepDiff result which is obtained by comparing two JSON files. I have to construct a python dictionary from the deepdiff result as follows.
json1 = {"spark": {"ttl":3, "poll":34}}
json2 = {"spark": {"ttl":3, "poll":34, "toll":23}, "cion": 34}
deepdiffresult = {'dictionary_item_added': {"root['spark']['toll']", "root['cion']"}}
expecteddict = {"spark" : {"toll":23}, "cion":34}
How can this be achieved?
A:
There is probably a better way to do this. But you can parse the returned strings and chain together a new dictionary with the result you want.
json1 = {"spark": {"ttl":3, "poll":34}}
json2 = {"spark": {"ttl":3, "poll":34, "toll":23}, "cion": 34}
deepdiffresult = {'dictionary_item_added': {"root['spark']['toll']", "root['cion']"}}
added = deepdiffresult['dictionary_item_added']
def convert(s, j):
s = s.replace('root','')
s = s.replace('[','')
s = s.replace("'",'')
keys = s.split(']')[:-1]
d = {}
for k in reversed(keys):
if not d:
d[k] = None
else:
d = {k: d}
v = None
v_ref = d
for i, k in enumerate(keys, 1):
if not v:
v = j.get(k)
else:
v = v.get(k)
if i<len(keys):
v_ref = v_ref.get(k)
v_ref[k] = v
return d
added_dict = {}
for added_str in added:
added_dict.update(convert(added_str, json2))
added_dict
#returns:
{'cion': 34, 'spark': {'toll': 23}}
| {
"pile_set_name": "StackExchange"
} |
Q:
R - divide some columns of a data.frame and keeping the others
I have a large data.frame made of character and numeric variables. I need to divide, let's say, columns 4, 5 and 6 by column 6, keeping columns 1, 2 and 3 as they are.
A:
Your question is a bit vague (for instance, do you want element wise division?), but is this what you're looking for?
## set up some test data
data.org=data.frame(matrix(1:100,ncol=10))
## make a copy of the org. data
data=data.org
## perform your element by element division
data[,4] = data.org[,4]/data.org[,6]
data[,5] = data.org[,5]/data.org[,6]
data[,6] = data.org[,6]/data.org[,6]
## Or the entire operation can be done with one line by
data[,4:6] = data.org[,4:6] / data.org[,6]
| {
"pile_set_name": "StackExchange"
} |
Q:
Prove that the dihedral group $D_4$ can not be written as a direct product of two groups
I'd like to know why the dihedral group $D_4$ can't be written as a direct product of two groups. It is a school assignment that I've been trying to solve all day and now I'm more confused than ever, even thinking that the teacher might have missed writing out that he means normal subgroups.
On another thread it was stated (as the answer to this question) that the direct product of two abelian groups is again abelian. If we consider the direct product of abelian subgroups $H$,$K\in G$ where $HK=G$ (for all $g \in G$, $g=hk$ $h \in H$, $k \in K$.) I can't understand why this would imply $g=kh$? It is not stated anywhere that $H$,$K$ has to be normal! But if this implication is correct I do understand why $D_4$ (that is non-abelian) can't be written as a direct product of two groups. But if it's not, as I suspect, I need some help!
We know that all groups of order 4 and 2 are abelian, (since $4=p^2$), but only 4 of the subgroups of $D_4$ are normal:
Therefore I can easily show that $D_4$ can't be a direct product of normal subgroups:
The only normal subgroups of $D_4$ are the three subgroups of order 4 (index 2 theorem), $\{e,a^2,b,a^2b\}$, $\langle a\rangle$, $\{e,a^2,ab,a^3b\}$, and the center of $D_4=\{e,a^2\}$. We can see that these are not disjoint. So $D_4$ can't be a direct product of normal subgroups. The reason for this being that the center is non-trivial. But why can't $D_4$ be a direct product of any two groups?
If we write the elements of $D_4$ as generated by $a$ and $b$, $a^4=e$, $b^2=e$, $ba=a^3b$ why isn't $D_4=\langle b\rangle\langle a\rangle$ ? I calculated the products of the elements of these two groups according to the rule given above and ended up with $D_4$, and also $\langle a\rangle$, $\langle b\rangle$ is disjoint...? Why is this wrong? Very thankful for an answer!
A:
For $G$ to be the direct product of $H$ and $K$, by definition we must meet the following conditions:
$H$ and $K$ are normal subgroups of $G$;
$G=HK$; and
$H\cap K=\{e\}$.
Note that you do not take all three conditions in your first paragraph explicitly. The normality of $H$ and $K$ is implicit when you say that it is a direct product. This is what you are missing: the definition of "direct product" requires normality of the two subgroups.
In general, if $H$ and $K$ are normal and $H\cap K=\{e\}$, then $hk=kh$ for all $h\in H$ and $k\in K$: indeed, consider the element $hkh^{-1}k^{-1}$. Writing it as $(hkh^{-1})k$, it is a product of two elements of $K$ (by normality of $K$), so it lies in $K$. But writing it as $h(kh^{-1}k)$, then, by normality of $H$, it is a product of two elements of $H$, so it lies in $H$. Hence, $hkh^{-1}k^{-1}$ lies in $H\cap K=\{e\}$. So $hkh^{-1}k^{-1}=e$. Multiplying by $kh$ on the right, we get $hk=kh$.
In particular, if $H$ and $K$ are abelian, then $G$ is abelian: given $hk$ and $h'k'$ in $G$ (using the fact that $G=HK$), then
$$(hk)(h'k') = h(kh')k' = h(h'k)k' = (hh')(kk') = (h'h)(k'k) = h'(hk')k = h'(k'h)k = (h'k')(hk)$$
so $G$ is abelian.
You are correct, however, that if $G$ is a product (not a direct product, but just a product) of two abelian subgroups $H$ and $K$ (which only requires that $HK=G$), then one cannot conclude that $G$ itself is abelian. For example, consider the nonabelian group of order $27$ and exponent $3$, presented by
$$\Bigl\langle a,b,c \;\Bigm|\; a^3=b^3=c^3=e,\ ba=abc,\ ac=ca,\ ab=ba\Bigr\rangle.$$
Let $H= \langle a,c\rangle$ and $K=\langle b\rangle$. Then $H$ and $K$ are each abelian, and $G=HK$, but $G$ is not abelian.
So, if you are asking whether $D_4$ can be written as a direct product of two proper subgroups, you agree that it cannot, because "direct product" necessarily requires $H$ and $K$ to be normal in $G$, and that would necessarily make $G$ abelian, which it is not.
Now, a separate question is: can we write $G$ as a product, not necessarily direct, of two subgroups? This only requires $G=HK$, and it does not even require $H\cap K=\{e\}$.
In this case, the answer is "yes": you can. You can even do it with $H\cap K=\{e\}$. For example, writing
$$D_4 = \Bigl\langle r,s\;\Bigm|\; r^4 = s^2 = e,\ sr=r^3s\Bigr\rangle$$
then we can take $H=\langle e, r, r^2, r^3\rangle$, and $K=\{e,s\}$. Then $HK$ has:
$$|HK| = \frac{|H|\,|K|}{|H\cap K|} = \frac{4\times 2}{1} = 8$$
elements, hence $HK=D_4$.
In fact, $D_4$ is a semidirect product of $C_4$ by $C_2$, which is what I exhibit above; in order for $G$ to be an (internal) semidirect product of $H$ and $K$, we require $H$ and $K$ to be subgroups such that:
$H\triangleleft G$;
$G=HK$; and
$H\cap K=\{e\}$.
In particular, every expression of $G$ as a direct product of two subgroups is also an expression as a semidirect product, but not conversely; and every semidirect product is also an expression as a product, but not conversely.
| {
"pile_set_name": "StackExchange"
} |
Q:
Showing that $f(x)=e^{-x}$ is uniformly continuous on $[0,\infty)$
I am trying to show that $f(x)=e^{-x}$ is uniformly continuous on $[0,\infty)$ and not having much success. I'm attempting to use a modified version of the following result (which I found on Math Stack Exchange here: https://math.stackexchange.com/questions/172988/please-prove-uniform-continuity
), that if $f$ is continuous on $[a,\infty)$ and the limit of $f(x)=L$ as $x$ approaches infinity, then $f$ is uniformly continuous on $[a,\infty)$.
This is the work I've done so far. I really don't know if it's valid reasoning...
The first problem I'm encountering is that the book has not yet defined functional limits, so I can't really use the proof directly. We have, however defined sequential limits and the notion of a decreasing function. If I define a sequence $x_n=n-1$ for $n$$\in$$\mathbb{N}$, then as $n$ goes to infinity, $f(x_n)\rightarrow0$. For $\epsilon/2$, $\exists$N$\in$$\mathbb{N}$ s.th for all n $\geq$ N, |$f(x_n)|$ <$\epsilon/2$. Then, since $f(x)$ is a decreasing function, this inequality holds for all $x$$\geq$$x_N$.
Since $\mathcal{S}$=$[0,N+1]$ is closed and bounded, and hence a compact set in $\mathbb{R}$, and $f(x)=e^{-x}$ is continuous on $\mathcal{S}$, then by the Uniform Continuity Theorem, $f$ is uniformly continuous on $\mathcal{S}$.
So, if $x,y\in\mathcal{S}$, then for all $\epsilon>0$ there exists $\delta>0$ such that if $|x-y|<\delta$ then $|f(x)-f(y)|<\epsilon$. Then (really not sure about this part), if $x, y\in(N+1,\infty)$, for any $x$ and $y$ in this set, we have $|f(x)-f(y)|=|f(x)+(-f(y))|\leq|f(x)|+|f(y)|<\epsilon/2+\epsilon/2=\epsilon$. So $f$ is uniformly continuous on $(N+1,\infty)$. Provided this part is correct, I am having trouble with the case where $x\in[0,N+1]$ and $y\in(N+1,\infty)$. Provided it's not correct, I'm having trouble on that part as well. Provided none of this is correct, well... I don't know.
Any help would be appreciated. Is this an appropriate way to approach the problem? Are there other ways?
A:
For any $x\geq 0$ and $y\geq 0$ we have
$$
\left\vert \mathrm{e}^{-x} - \mathrm{e}^{-y} \right\vert = \left\vert \mathrm{e}^{-\min(x,y)} \left(1 - \mathrm{e}^{-|y-x|}\right) \right\vert = \mathrm{e}^{-\min(x,y)} \left(1 - \mathrm{e}^{-|y-x|}\right) \leqslant \mathrm{e}^{-\min(x,y)} |y-x|
$$
Hence for all $x\geq 0,\ y\geq 0$ such that $|x-y| < \delta$
$$
\left\vert \mathrm{e}^{-x} - \mathrm{e}^{-y} \right\vert < \delta
$$
which verifies the direct definition of uniform continuity: given $\epsilon>0$, simply take $\delta=\epsilon$.
A:
Proof 1
$f'(x)=-\mathrm{e}^{-x}$ is bounded on $[0,\infty)$: $|f'(x)|=\mathrm{e}^{-x}\leq M$ with $M=1$.
By the Lagrange mean value theorem,
$$|f(x)-f(y)| = |f'(c)|\,|x-y| \leq M |x-y|,$$
so $f$ is uniformly continuous on $[0,\infty)$ (given $\epsilon>0$, take $\delta=\epsilon/M$).
Proof 2
If you want to use the fact that, when $f$ is continuous on $[a,\infty)$ and $f(x)\to L$ as $x$ approaches infinity, then $f$ is uniformly continuous on $[a,\infty)$:
1). $\lim_{x\to\infty} f(x)=L$, so, $\forall \epsilon \gt0$, there is $T \gt 0$ such that
$$|f(x)-f(y)|\lt \epsilon \qquad \text{for all } x,y\gt T.$$
2). $f$ is continuous on the compact interval $[0,T+1]$, hence uniformly continuous there; that is, there is $\delta_1 \gt 0$ such that whenever $x,y\in [0,T+1]$ and $|x-y|\lt \delta_1$,
$$|f(x)-f(y)|\lt \epsilon. $$
3). Let $\delta=\min\{1,\delta_1\}$. If $x,y\geq0$ and $|x-y|\lt \delta$, then $x,y\in (T,\infty)$ or $x,y\in [0,T+1]$ (indeed, if one of them exceeds $T+1$, then since $|x-y|\lt 1$ the other exceeds $T$). From 1) and 2), we have
$$|f(x)-f(y)|\lt \epsilon $$
so $f$ is uniformly continuous on $[0,\infty)$
| {
"pile_set_name": "StackExchange"
} |
Q:
Bootstrap 4 JavaScript not working Symfony4
I'm trying to use Bootstrap 4 in Symfony 4 using the yarn package manager, but somehow the JavaScript is not working. I have no errors in the console, but when I try to trigger the navbar collapse button, it won't open the navbar.
This is my code:
app.js
var $ = require('jquery');
require("bootstrap/js/dist/");
base.html.twig
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>{% block title %}Welcome!{% endblock %}</title>
<link rel="stylesheet" href="{{ asset('build/css/app.css') }}">
</head>
<body>
<nav class="navbar navbar-expand-lg navbar-light bg-light">
<a class="navbar-brand" href="#">CRM Fabriek</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav mr-auto">
<li class="nav-item active">
<a class="nav-link" href="#">Home <span class="sr-only">(current)</span></a>
</li>
<li class="nav-item">
<a class="nav-link" href="#">Link</a>
</li>
</ul>
<form class="form-inline my-2 my-lg-0">
<input class="form-control mr-sm-2" type="search" placeholder="Search" aria-label="Search">
<button class="btn btn-outline-success my-2 my-sm-0" type="submit">Search</button>
</form>
</div>
</nav>
{% block body %}{% endblock %}
<script src="{{ asset('build/js/app.js') }}"></script>
</body>
</html>
I compiled the js to the build/js/app.js file by using yarn run encore dev
A:
Import Bootstrap’s JavaScript by adding this line to your app’s entry point (usually index.js or app.js):
import 'bootstrap';
or indicate the path completely
import 'bootstrap/dist/js/bootstrap';
or if you prefer require()
require('bootstrap/dist/js/bootstrap');
Alternatively, you may import plugins individually as needed:
import 'bootstrap/js/dist/util';
import 'bootstrap/js/dist/dropdown';
...
https://getbootstrap.com/docs/4.1/getting-started/webpack/#importing-javascript
The webpack-encore documentation says that you must add a call to .autoProvidejQuery() in your webpack.config.js file, because Bootstrap expects jQuery to be available as a global variable.
// webpack.config.js
Encore
// ...
.autoProvidejQuery();
http://symfony.com/doc/current/frontend/encore/bootstrap.html#importing-bootstrap-javascript
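Putting it together, a minimal sketch of the entry file (assuming jquery and bootstrap were installed with yarn; for Bootstrap 4 you may also need popper.js installed, since its bundle imports it):
// app.js – the Encore entry point
import $ from 'jquery';
import 'bootstrap'; // registers the Bootstrap plugins (collapse, dropdown, ...) on jQuery
After adding .autoProvidejQuery() to webpack.config.js as shown above, re-run yarn run encore dev and reload the page.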
| {
"pile_set_name": "StackExchange"
} |
Q:
Estimate the difference between $f$ and $p$ interpolating $f$
Suppose $p$ is the unique polynomial of degree $\leq 2$ that agrees with a function $f$ at points $a_1 < a_2 < a_3$. If the third derivative $f^{(3)}$ exists, and $x\in (a_1,a_3)$, then we can say that $f(x)-p(x) = \dfrac{f^{(3)}(c)}{3!}(x-a_1)(x-a_2)(x-a_3)$ for some $c\in (a_1, a_3)$ depending on $x$. A proof is to let $g(x):=(x-a_1)(x-a_2)(x-a_3)$, and consider the function
$$h_x:t\mapsto (f(x)-p(x))g(t)-(f(t)-p(t))g(x).$$
$h_x$ is zero at $t=a_1, a_2, a_3$, and $x$, so by applying Rolle's theorem several times, we can find a point $c$ such that $h_{x}^{(3)}(c)=0$, and I claim that calculating the derivative at that point gives us what we want.
This result is useful since it allows us to bound the error $f-p$ by knowing something about the third derivative of $f$.
A similar tactic is to take the unique polynomial $p$ of degree $\leq 2$ such that $f(a_1)=p(a_1)$, $f(a_2)=p(a_2)$, and $f'(a_1)=p'(a_1)$. Notice that we can reach a similar result, $f(x)-p(x) = \dfrac{f^{(3)}(c)}{3!}(x-a_1)^2(x-a_2)$, with the same Rolle's theorem argument.
What if we consider the unique polynomial of degree $\leq 2$ such that $f(a_1)=p(a_1)$, $f(a_2)=p(a_2)$, and $f'(a_3)=p'(a_3)$, where $a_1<a_3<a_2$? (Note that in general we are only free to choose $a_3$ so long as $a_3 \neq \frac12 (a_1+a_2)$.) Then the Rolle's theorem argument can break down, because we get two zeroes of $h_x'$ in $(a_1,a_2)$, but we're not sure if one of them is the same point as $a_3$. We need three distinct zeros of $h_x'$ to get what we need from the Rolle's theorem conclusion.
Is there a modification we can make in this case, or is there a way of getting a similar estimate for the error? What is the general rule for estimating the error of a polynomial which has the same derivative as a function, at a point where $f$ and $p$ do not necessarily agree?
A:
Let's normalize so $f(-1) = p(-1) = 0$ and $f(1) = p(1) = 0$, and say we want
$f'(a_3) = p'(a_3)$ where $a_3 \ne 0$. $p$ being quadratic, we have
$p(x) = c (1 - x^2)$ for some constant $c$, and $p'(a_3) = -2 c a_3$, i.e.
$c = - f'(a_3)/(2 a_3)$. If $a_3$ is very close to $0$ and $f'(0) \ne 0$,
this forces $|c|$ to be very large, making $p(x)$ a very poor approximation for $f(x)$. So I don't see how you can get any kind of a useful bound for the error
from $f'''$. Sharing the same derivative at a point where the function values don't have to agree does not improve the approximation.
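As a concrete illustration (my own choice of function, not from the original post): take $f(x)=\sin(\pi x)$, which vanishes at $\pm 1$ and has $|f'''(x)| = \pi^3|\cos(\pi x)| \leq \pi^3$ bounded. Then
$$c = -\frac{f'(a_3)}{2a_3} = -\frac{\pi\cos(\pi a_3)}{2a_3},$$
which blows up as $a_3\to 0$, so $p(x)=c(1-x^2)$ is arbitrarily far from $f$ even though $f'''$ stays bounded.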
| {
"pile_set_name": "StackExchange"
} |
Q:
How to add button and total dynamically?
How can I add a Total TextView inside ListView, as well as a Save Button under the ListView, dynamically?
Total is to be displayed similar to the following image. (note: not my data representation)
Activity A (Code Snippet)
long as=0; // for total
long bs=0;
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data)
{
// receive from B
switch (requestCode)
{
case 0:
result = data.getStringExtra("text"); // holds value (34,24)
name = data.getStringExtra("a"); // Project
as = as+Long.parseLong(result); // For total amount
Text = " " + name + " " + "RM" + result + ""; // display in listView
if (mClickedPosition == -1) {
//add new list
m_listItems.add(Text);
m_listBitmapItems.add(Global.img);
} else {
// edit listView value and image
m_listItems.set(mClickedPosition, Text);
m_listBitmapItems.set(mClickedPosition, Global.img);
}
adapter.notifyDataSetChanged();
listV.setAdapter(adapter);
break;
case 1: // Another name
result = data.getStringExtra("text");
name = data.getStringExtra("a");
description = data.getStringExtra("c");
bs = bs + Long.parseLong(result);
Log.d("FIRST", "result:" + result);
Text = " " + name + " " + "RM" + result + "";
if (mClickedPosition == -1)
{
m_listItems.add(Text);
} else {
m_listItems.set(mClickedPosition, Text);
}
adapter.notifyDataSetChanged();
listV.setAdapter(adapter);
break;
}
long samount = as + bs;
Toast.makeText(getActivity(), samount + "", Toast.LENGTH_LONG).show();
}
Activity A (Rendered Output)
Here I would like to add a Total TextView with value 58 (output as RM58), as well as a Save Button.
I am using the following xml files.
a.xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:orientation="vertical">
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:padding="10dp"
android:text="@string/text"
android:textSize="20sp" />
<ListView android:id="@+id/listView1"
android:layout_width="fill_parent"
android:layout_height="fill_parent" />
</LinearLayout>
claims.xml (inside the listView)
<?xml version="1.0" encoding="utf-8"?>
<AbsoluteLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:orientation="vertical">
<TextView android:id="@+id/textView1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:textAppearance="?android:attr/textAppearanceMedium" />
<ImageView android:id="@+id/imageView4"
android:layout_width="101dp"
android:layout_height="50dp" />
</AbsoluteLayout>
Thanks.
A:
Add the following footer.xml file to res/layout http://developer.android.com/reference/android/widget/ListView.html
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical" >
<!-- Format as you please -->
<TextView
android:id="@+id/tv"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:gravity="center_horizontal"
android:text="Total" />
</LinearLayout>
In your activity create and initialise the following variables.
LayoutInflater inflater = getLayoutInflater();
ViewGroup footer = (ViewGroup) inflater.inflate(R.layout.footer, listView,
false);
// You can add this when an item is added and remove it if the list is empty.
listView.addFooterView(footer, null, false);
tv = (TextView)footer.findViewById(R.id.tv);
// Update the text.
tv.append("t/"+ results.toString());
As mentioned in the comments, you can either use long total as a static class member and update it with the results value total+=results; taking care to reset the value to zero or decrement it if an item is removed from the list.
The other way is to loop through the items in your list, parsing the items to the object and getting the particular value of type long from each object and summing them as you go.
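A rough sketch of that second approach, working directly off the m_listItems strings from your activity (this assumes each row ends with "RM<amount>", as in your Text format — adjust the parsing to your real data):
private long computeTotal() {
    long total = 0;
    for (String item : m_listItems) {
        // each row looks like "  <name> RM<amount>", so take what follows "RM"
        int idx = item.lastIndexOf("RM");
        if (idx != -1) {
            try {
                total += Long.parseLong(item.substring(idx + 2).trim());
            } catch (NumberFormatException e) {
                // ignore rows that don't carry an amount
            }
        }
    }
    return total;
}
// after notifying the adapter in onActivityResult:
tv.setText("Total: RM" + computeTotal());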
As you are now able to dynamically add your button, I'll add briefly for other users browsing, set the buttons visibility to GONE, so that the element does not displace the layout when it is not visible, HIDDEN makes the element not visible, but the space taken by the element affects the layout (and sometimes this is useful). Then the button visibility is dynamically changed to VISIBLE when an item is added to the list, and back to GONE when the list is cleared.
<Button
android:id="@+id/btn"
android:text="button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:visibility="gone"/>
http://developer.android.com/reference/android/transition/Visibility.html
| {
"pile_set_name": "StackExchange"
} |
Q:
Column ambigously defined in subquery with join on another subquery
I keep getting a column ambiguously defined error when joining two sub queries. However I have defined all my columns properly. I want to get all the data from the first query and add some data to it where available. How can this be fixed?
SELECT
sq2.month,
sq1.PRIMARY_MER_NUM ,
sq1.PRIMARY_EXT_MID ,
sq1.MER_DBA_NAM,
sq1.CLG_NUM,
sq1.ENT_NUM,
sq1.ENT_NAM,
sq1.MER_OPN_DTE,
sq1.MER_CLS_DTE,
sq1.MER_FST_DPST_DTE,
sq1.CLG_NUM ,
sq1.ENT_NUM,
sq2.gross_volume,
sq2.transaction_count
FROM
(SELECT DISTINCT
PRIMARY_MER_NUM ,
PRIMARY_EXT_MID ,
MER_DBA_NAM,
CLG_NUM,
ENT_NUM,
ENT_NAM,
MER_OPN_DTE,
MER_CLS_DTE,
MER_FST_DPST_DTE,
CLG_NUM ,
ENT_NUM
FROM
bi.t_mer_dim_na
WHERE
CLG_NUM = 7
AND ENT_NUM IN ('45810', '45811', '46849', '45948', '45824',
'46911', '45509', '46845', '48902')
) sq1
LEFT JOIN
(SELECT
TRUNC(BAT_REF_DTE, 'MM') AS month,
MER_NUM,
SUM(bat_prd_trn_dr_amt + bat_prd_trn_cr_amt) AS gross_volume,
SUM(bat_item_num) AS transaction_count
FROM
TDS.BAT_T3
WHERE
1 = 1
AND bat_ref_dte >= TRUNC(sysdate, 'MM')
GROUP BY
TRUNC(BAT_REF_DTE, 'MM'), MER_NUM) SQ2 ON sq1.primary_mer_num = sq2.MER_NUM;
A:
You have CLG_NUM and ENT_NUM selected twice in your first derived table sq1
FROM (
select DISTINCT
PRIMARY_MER_NUM ,
PRIMARY_EXT_MID ,
MER_DBA_NAM,
CLG_NUM, --1
ENT_NUM, --1
ENT_NAM,
MER_OPN_DTE,
MER_CLS_DTE,
MER_FST_DPST_DTE,
CLG_NUM, --2
ENT_NUM --2
from bi.t_mer_dim_na
That makes selecting sq1.CLG_NUM and sq1.ENT_NUM ambiguous in your outer select (where you also select both twice).
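So the fix is simply to drop the duplicate columns from both the inner and the outer SELECT. A sketch of the corrected query (untested — it only removes the repeated columns, everything else is as you had it):
SELECT
    sq2.month,
    sq1.PRIMARY_MER_NUM,
    sq1.PRIMARY_EXT_MID,
    sq1.MER_DBA_NAM,
    sq1.CLG_NUM,
    sq1.ENT_NUM,
    sq1.ENT_NAM,
    sq1.MER_OPN_DTE,
    sq1.MER_CLS_DTE,
    sq1.MER_FST_DPST_DTE,
    sq2.gross_volume,
    sq2.transaction_count
FROM
    (SELECT DISTINCT
         PRIMARY_MER_NUM,
         PRIMARY_EXT_MID,
         MER_DBA_NAM,
         CLG_NUM,
         ENT_NUM,
         ENT_NAM,
         MER_OPN_DTE,
         MER_CLS_DTE,
         MER_FST_DPST_DTE
     FROM bi.t_mer_dim_na
     WHERE CLG_NUM = 7
       AND ENT_NUM IN ('45810', '45811', '46849', '45948', '45824',
                       '46911', '45509', '46845', '48902')
    ) sq1
    LEFT JOIN
    (SELECT
         TRUNC(BAT_REF_DTE, 'MM') AS month,
         MER_NUM,
         SUM(bat_prd_trn_dr_amt + bat_prd_trn_cr_amt) AS gross_volume,
         SUM(bat_item_num) AS transaction_count
     FROM TDS.BAT_T3
     WHERE bat_ref_dte >= TRUNC(sysdate, 'MM')
     GROUP BY TRUNC(BAT_REF_DTE, 'MM'), MER_NUM
    ) sq2
        ON sq1.primary_mer_num = sq2.MER_NUM;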
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I fix this? Python list with input
I need to create a program where, if you input 1, it shows you the actual list (which starts empty); if you input 2, you can add "x" to the list; if you input 3, you can remove from the list; and lastly, if you input 9, you exit Python.
Here's the code:
list = []
if input == "1":
print list
if input == "2":
list.append("hi")
print list
if input == "3":
list.remove("hi")
print list
if input == "9":
sys.exit()
I'll be glad if someone helps me.
A:
You can try this.
I assumed you meant to take input every time, so there is an infinite while loop (if I am wrong and you meant to take input only once, then you can just remove the while loop and it will still work fine).
Also, what I assumed about removing an element from the list is that you meant to remove the latest element which was inserted, so this code will remove the latest element that was inserted.
Also, if there is any input other than {1,2,3,9}, the following code will take care of it.
Hopefully this will solve your problem.
import sys
arr = [] #creates an empty list.
while True:
inp = raw_input("Enter the number you want to enter {1,2,3,9} :- ")
if inp == "1":
print arr;
elif inp == "2":
temp = raw_input("Enter what you want to add to the list :- ")
arr.append(temp)
print arr
elif inp == "3":
arr = arr[:-1] ## will remove the last element , and will also handle even if there is no element present
print arr
elif inp == "9":
sys.exit()
else: #Any input other than {1,2,3,9}
print "Please enter the input from {1,2,3,9}"
| {
"pile_set_name": "StackExchange"
} |
Q:
windows phone 7.5 mango app size limit
What is the size limit for an mango windows phone app?
i know that the size limit for a NoDo app is 225 mb. Does the limit remain?
A:
As far as I know yes, there haven't been any announcements I came across that addressed different limits for your app sizes.
http://msdn.microsoft.com/en-us/library/hh184844(v=VS.92).aspx
however, there have been changes made to the amount of memory they are allowed to use: on devices with 256MB or less the limit is 90MB of RAM, and I am unsure how high the limit would be for devices with more than 256MB of RAM
http://msdn.microsoft.com/en-us/library/hh184840(v=VS.92).aspx
There is no limit on the space taken by files in IsolatedStorage; ApplicationStorage, which is temporary, can be up to 4MB in size and will be lost when the application is closed
| {
"pile_set_name": "StackExchange"
} |
Q:
Which USB floppy drives can read HFS-formatted 400K & 800K floppies?
Does anyone know of a USB floppy drive model that can read old 400K & 800K Classic Mac OS (HFS) floppies? I have an Iomega Floppy Plus, but it can only read 1.44MB HFS floppies.
A:
It looks like you are going to need to use an older Mac to read the disks, due to the complexities involved with the variable-speed drives used to format those disks, unless you want to re-program your floppy drive. I am not sure if you can do it with the Iomega drive, especially since it's USB; you would probably have to take it apart to do the next option.
According to this forum there is a way to control a floppy drive to get it to read those 400/800K disks. So it looks like you need a internal floppy drive and KryoFlux a USB-based floppy controller. Note This is going to require some hacking fun to accomplish...
If that is not your thing, then you should find an old Mac with a floppy drive to read the files and move them to more modern media via networking / apple-share etc. I am not sure what model of macs with floppy drives could read and write 400K and 800K disks, so there is another question to ask, and here is the answer Floppy Drive Observations: A Compleat Guide to Mac Floppy Drives and Disk Formats. So it looks like any Mac with a SuperDrive 1.44 MB type floppy drive will work at reading those older 400K and 800K diskettes.
FYI, a really good source on this information relating to the Mac 400/800K drives. Working with Macintosh Floppy Disks in the New Millennium
| {
"pile_set_name": "StackExchange"
} |
Q:
WordPress get_post_galleries_images( $post->ID ) is only showing thumbnail images
WordPress get_post_galleries_images( $post->ID ) is only showing thumbnail images. The image uploaded from the editor to the gallery is resized to different sizes, and the function get_post_galleries_images( $post->ID ) is showing the 150x150 size image. How can I show the full-sized uploaded image? Please help.
A:
It's a duplicate of get_post_gallery_images returns thumbs . I want full size
You need to add size="full" attribute to the gallery shortcode in the
post content like
[gallery ids="836,830,829" size="full"]
| {
"pile_set_name": "StackExchange"
} |
Q:
Sort a column in Worksheet using Python
In the below program I have created a workbook which contains a worksheet named sort,
where I have placed words in one column and numbers in another column.
I have successfully outputted the .xlsx file,
but I need the numbers to be sorted from descending to ascending order.
I don't know how to place the code for that.
Code
=====
import csv
import xlsxwriter
import re
workbook = xlsxwriter.Workbook('wordsandnumbers.xlsx')
worksheet = workbook.add_worksheet('sort')
with open('sort.csv') as f:
reader = csv.reader(f)
alist = list(reader)
worksheet.write(2,0,'words')
worksheet.write(2,1,'Numbers')
newlist = []
for values in alist:
convstr = str(values)
convstr = convstr.split(",")
newlist.extend(convstr)
a=3
for i in range(3,10):
newlist[a] = re.sub('[^a-zA-Z]','',newlist[a])
worksheet.write(i,0,newlist[a].strip('['))
a=a+1
newlist[a] = re.sub('[^0-9]','',newlist[a])
int(newlist[a])
worksheet.write(i,1,newlist[a])
a=a+1
workbook.close()
The Output i'm getting in .xlsx sheet is :
Needed output:
(The corresponding words which is in the same row of number should also be sorted)
A:
I would recommend loading your original csv as a dataframe and then sorting it by a particular column. I've provided a fully reproducible example below that illustrates this.
I make my own version of sort.csv for demonstration purposes, then read it in as a dataframe using pandas.read_csv, and then sort using pandas.DataFrame.sort_values.
import pandas as pd
sort = open('sort.csv', 'w+')
sort.write('May, 5227\n')
sort.write('June, 417\n')
sort.write('Jan, 4\n')
sort.write('Feb, 424\n')
sort.write('Dec, 36\n')
sort.write('Mar, 4981\n')
sort.write('Apr, 3460\n')
sort.close()
df = pd.read_csv('sort.csv', names = ['words', 'Numbers'])
df = df.sort_values(['Numbers'], ascending=[False])
writer = pd.ExcelWriter('wordsandnumbers.xlsx', engine='xlsxwriter')
df.to_excel(writer, index=False, startrow=2)
writer.save()
Outputted sort.csv:
Outputted wordsandnumbers.xlsx:
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get a job in Australia before I go there?
I have been granted permanent residency in Australia (i.e. I can live and work there). I intend to relocate from Ireland in October. Ideally I could have a job lined up before I go, as it's a high-risk and potentially expensive move if I have to support myself and my family through a period of unemployment.
Is it possible / likely that I could secure employment before I go? What are the best ways that I could go about doing this? Should I just start applying for jobs on the Australian jobs websites? Should I contact recruiters directly? (this isn't exactly appealing because recruiters tend to ignore you if it makes their job difficult)
For what it's worth, I have a BSc in Computing (Software Development), with 8 years experience in .Net development. I'd prefer Melbourne if possible.
A:
I'm an IT worker (software dev / test automation). I moved to Australia last year.
Seek.com.au is the biggest jobs site (disclaimer: I'm contracting there at present). If you create a profile on there, it'll go a long way - recruiters will also contact you as a result of your profile being active and searchable.
For random 'spot jobs', Spotjobs is up and coming, but probably not ideal for you.
It's always hard to get a job before the companies have seen you, but if your linkedin profile is up to date, you have an available phone number and email on your profiles, you stand a better chance.
In addition, join a meetup for the city you're aiming for, and contact some of the people on that - software developers, Agile groups, geek groups, and so on. Many of them will know of opportunities, and given they may get bonuses for referring someone, may be very keen to help.
Get yourself onto many recruiters' books. Don't harass them, but a simple email with your resume and contact details should suffice, and then check back in when you arrive in the country if nothing has happened yet.
Furthermore, make it VERY clear on all your applications, profiles and contacts that you have permanent residency and the right to work in Australia. This is one of the highest priorities that IT recruiters are looking for (outside of the obvious skills matches).
A:
It's certainly possible to arrange employment before arriving in your new country, although employers are likely to want to meet you in person before actually extending an offer (particularly if the position is not a remote working position).
Before moving to New Zealand some years ago, I had located a job offer on one of the job web sites, and did a telephone interview before arriving in the country. After arriving, I did an in-person interview and everything proceeded from there.
Employers of expats will generally want to be sure that the prospective employee has the right to work in the country. In your case, you have been granted permanent residency with the right to work but you aren't actually in the country yet. It is possible (though unlikely) that you could be turned away at the border and not admitted to Australia. An employer will probably want to make a copy of your passport with the work authorisation for their records, which they can only do with you present.
Talking to other workers in your field and establishing a network of contacts will certainly be useful, and is something you can do from anywhere. For example, the mailing list for the local Python User's Group often has introductions from migrants who haven't actually arrived in the country yet. It's obviously not a job search service, but such contact can be useful in locating technology-specific jobs.
Finally, an option to cover the uncertain period would be to secure a remote working contract with somebody, perhaps in your home country, where you can do some work for a short time until you find local employment.
| {
"pile_set_name": "StackExchange"
} |
Q:
Question on how to remove a Visual Studio Breakpoint
Let's say I have 10 breakpoints and I want to clear one but not the other 9.
If I toggle the breakpoint on the one that I want to remove, it is resurrected the next time I restart the app. The only way that I know to permanently get rid of it is to clear ALL the breakpoints, which I would rather not do since I would have to reset the other 9.
Is there a better way in ANY VS version?
A:
The breakpoint's state is only temporarily altered if you change it while you're debugging and it's a multi-bound breakpoint. This is actually a feature of Visual Studio. See the last post here.
If you're not debugging, and you remove it, then it won't come back. Alternately, as others have suggested, you can remove it permanently using the breakpoint management window.
A:
Hitting Ctrl+Alt+B will bring up a list of all breakpoints in your solution, from which you can manually toggle or delete them with a right-click.
A:
Open the breakpoints window (Debug -> Windows -> Breakpoints), select the breakpoint you want to delete and press the delete key (or click the cross icon).
If you are toggling the breakpoint using the keyboard (F9 using my keyboard mappings), it sometimes doesn't remove it properly. Pressing F9 again will remove it fully (this is due to the breakpoint being set on multiple threads and toggling it whilst debugging only disables the main breakpoint but not the ones for the other threads).
| {
"pile_set_name": "StackExchange"
} |
Q:
Error while setting up pouchdb-authentication (ionic)
I am new to Angular and Ionic. I was trying to set up pouchdb-authentication and got this error. I must have made some mistake somewhere but I could not figure it out. Please help. Before posting I searched the issue for 2 days but did not find the solution.
in app.js:
var localDB = new PouchDB('testLogin');
var remoteDB = new PouchDB('http://localhost:5984/testLogin', {skipSetup: true});
and in my service:
remoteDB.signup(userName, password, {
metadata : {
email : email
}
}, function (err, response) {
if(err){
console.log("failure signup");
} else{
console.log("success signup");
}
});
TypeError: remoteDB.signup is not a function
at Object.self.signup (/localhost:8100/js/services/authentication.js:40:13)
at Scope.$scope.signup (/localhost:8100/js/controllers/authentication.js:41:16)
at fn (eval at <anonymous> (/localhost:8100/lib/ionic/js/ionic.bundle.js:26457:15), <anonymous>:4:320)
at callback (/localhost:8100/lib/ionic/js/ionic.bundle.js:36610:17)
at Scope.$eval (/localhost:8100/lib/ionic/js/ionic.bundle.js:29158:28)
A:
It seems that the plugin is not actually installed. If you are using <script> tags, then you just need to make sure that the authentication script is declared after the PouchDB script. Otherwise you need to do PouchDB.plugin(require('pouchdb-authentication')).
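For example, once the plugin package is installed (pouchdb-authentication via npm or yarn), the setup could look roughly like this — userName, password and email being the same variables you already use in your service:
var PouchDB = require('pouchdb');
PouchDB.plugin(require('pouchdb-authentication'));

var remoteDB = new PouchDB('http://localhost:5984/testLogin', {skipSetup: true});

remoteDB.signup(userName, password, {
  metadata: { email: email }
}, function (err, response) {
  if (err) {
    console.log('failure signup', err);
  } else {
    console.log('success signup', response);
  }
});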
| {
"pile_set_name": "StackExchange"
} |
Q:
What is the significance of this character being given a name?
In Solo: A Star Wars Story, we witness
Han being given the name "Solo" during his application for Imperial flight training. More specifically, the officer asks Han who his "people" are, to which Han replies that he doesn't have any. The officer pauses, then decides to write down "Solo", as though not having a "people" was out of the ordinary, and that he had to come up with something on the spot to enroll him.
But considering the nature of the planet (tons of runaways, for example), I would have expected this to be a fairly common occurrence for people trying to "get away". Either enough for them to not require it, or at least have some default. Is being given a name in this fashion common, or is this the only time we see this happen in canon?
A:
Before the recruiter asks Han who his "people" are, he first asks Han for a surname, which Han refuses to give. We know that Han did live with his father when he was younger, as he tells Lando that his father worked at a factory building YT-1300 freighters, and that his father's dream had been to become a pilot. So presumably Han inherited his father's surname, if that's the norm on Corellia.
However, in the course of that conversation, Han tells Lando that he doesn't have a good relationship with his father - to which Lando commiserates, so Han feels no need to explain further. As such, it seems that we're meant to infer that Han doesn't want to be connected with his birth family - presumably his difficult relationship with his father led him to take an opportunity to leave his family name behind.
A:
According to Solo: Tales from Vandor (a canon book containing various legends that are told about Han Solo and the protagonists of the Solo film) the Imperial 'intake forms' require a surname without exception. Solo is a common workaround for individuals who have no second name or are unwilling to share it.
Then just the other night, this retired Imperial intake officer said the Empire's standard Military intake form requires a last name, and the computer system rejects any application if the last name is left blank. So if you come from a culture where people only have one name, joining the Empire means getting a second one whether you like it or not. She said different intake officers fill in different last names, and two of the most common choices are NA - for 'not applicable' - and SOLO.
If that’s what happened to Han, I guess he's lucky. He could have become known as Han Na
Han seems to consider his name to be a mononym. His entire family are dead and he doesn't feel sufficiently connected to anyone else (aside from Qi'ra) to use their family name as a replacement.
The man waited, but Han didn’t say anything else. “Han what?” he asked.
Han frowned, confused. Han was his name. It had always been his name. He didn’t have another.
“Who are your people?” the man pressed.
His people? His family was gone. The White Worms hadn’t been family. The closest thing he had was Qi’ra and he didn’t know her people’s name, either. Neither seemed to be the answer the officer was looking for.
He shrugged. “I have no people. I’m alone.” The words hurt more than he had expected.
Solo: A Star Wars Story: Expanded Edition
In the film's junior novel, the implication is more that he doesn't feel that there's anyone who's worthy of him. By refusing a second name, he's cutting his ties with Corellia.
“Han. Han what?” The recruiter gazed at him impatiently, eyebrows raised, fingers poised over the keypad to complete the application form. “Who are your people?”
Glancing over his shoulder, Han cast one last look at the world he was leaving behind. And Qi’ra. That final image of her—her face on the other side of the glass, her voice giving him permission to leave as Rebolt and the others pulled her away from him—would be burned into his brain forever.
Solo: A Star Wars Story: Junior Novel
| {
"pile_set_name": "StackExchange"
} |
Q:
angular-cli ng serve --- how can i stop its regular refresh
After running ng serve, the browser regularly refreshes and the page is reloaded. How can I stop it, so that I can debug? I see that it is the result of a WebSocket, so what configuration can I make to stop this regular refresh?
A:
You can use ng serve --live-reload=false
See here for more options you can provide to the command:
https://github.com/angular/angular-cli/blob/master/docs/documentation/serve.md
| {
"pile_set_name": "StackExchange"
} |
Q:
Highchart - Tooltip prevents click
I have a stacked bar chart. there is a problem with the tooltips. One cannot click an item hidden by a tooltip. The simple solution is to remove the tooltip - but it's useful so I would rather keep it.
Ideally I would like to be able to click the underlying part of the chart through the tool tip. Another solution I thought of would be to have the tool tip move away as the mouse approaches it.
jsfiddle.net/MAYO/cm5roecm/2/
A:
You can use tooltip.followPointer set to true to prevent the cursor from ever overlapping the tooltip.
Example code: (updated JSFiddle):
tooltip: {
followPointer: true
}
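In context, the option just sits alongside the rest of your chart configuration. A rough sketch (assuming a reasonably recent Highcharts version — on older releases use new Highcharts.Chart with chart.renderTo instead; the container id, series data and click handler are placeholders):
Highcharts.chart('container', {
    chart: { type: 'bar' },
    tooltip: {
        followPointer: true // tooltip tracks the mouse instead of covering the bars
    },
    plotOptions: {
        series: {
            stacking: 'normal',
            point: {
                events: {
                    click: function () {
                        console.log('clicked', this.category, this.y);
                    }
                }
            }
        }
    },
    series: [{ data: [5, 3, 4] }, { data: [2, 2, 3] }]
});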
| {
"pile_set_name": "StackExchange"
} |
Q:
newcommand with argument produces unwanted extra space within tikzpicture
I am creating some pictures I want to use within nodes. When I use a \newcommand with an argument, extra space is added and I can't get rid of it.
\documentclass{article}
\usepackage{tikz}
\begin{document}
\newcommand{\foo}[1][3]{
\begin{tikzpicture}
\foreach \x in {0,1,...,#1}
\filldraw (\x,0) circle (0.2cm);
\end{tikzpicture}
}
\newcommand{\test}{
\begin{tikzpicture}
\foreach \x in {0,1,...,3}
\filldraw (\x,0) circle (0.2cm);
\end{tikzpicture}
}
\begin{tikzpicture}
\node at(0,0) [draw, rectangle, inner sep=0] {\foo};
\node at(0,-0.5) [draw, rectangle, inner sep=0] {\test};
\end{tikzpicture}
\end{document}
A:
Remove the spurious blank spaces: the line endings after the opening brace of the macro and after \end{tikzpicture} are turned into space tokens, which is what produces the extra space inside the node; a trailing % suppresses them.
\newcommand{\foo}[1][3]{%
\begin{tikzpicture}
\foreach \x in {0,1,...,#1}
\filldraw (\x,0) circle (0.2cm);
\end{tikzpicture}%
}
\newcommand{\test}{%
\begin{tikzpicture}
\foreach \x in {0,1,...,3}
\filldraw (\x,0) circle (0.2cm);
\end{tikzpicture}%
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Set application path in visual studio
I've got to make some changes to an app that has many hard-coded paths. These paths are based on the assumption that the application path is null. But when I run locally in visual studio the app path is something else. So on the production server the name might be "http://example.com/default.aspx", while locally it is something like "http://localhost:1234/myapp/default.aspx".
Is there any way to set the application path in visual studio, so I can set it to null?
And yes, yes, I know that the "right answer" is to eliminate the hard-coded paths. Long term, I'd love to do that. But that would be a lot of work for a modest change, and then I'd have to test everything in sight to make sure I didn't screw something up.
A:
In the webapp project properties, under Web, in the Servers section you can select the server and the app path on that server. If you use Visual Studio Development Server, the default path is "/".
Also, when you publish your app, you can select profiles with their relative target location (right-click project and Publish).
Lastly, you can use the ~ (tilde) in front of your relative hardcoded paths to signify that they belong to the root webapp folder.
| {
"pile_set_name": "StackExchange"
} |
Q:
Where are the Gnome Fallback Configuration User Files?
I am trying to find the Gnome Panel layout File for the user. Under the User's Folder I'm not finding it under .gnome2 or .gconf or anything. I found the Default XML file under /usr/share/gnome-panel/panel-default-layout.layout. Any Ideas? I'm using Ubuntu 11.10.
A:
Gnome3 uses DConf instead of GConf. The Gnome Panel settings are at org.gnome.gnome-panel.
As DConf saves its data in binary form to improve performance you can't edit the settings by hand but need to use the command line tools gsettings (package libglib2.0-bin) or dconf (package dconf-tools) or the GUI dconf-editor (package dconf-tools) to view or edit them.
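For example, to inspect what is stored there from a terminal (treat the exact paths and key names as a starting point — they can vary between releases):
# dump everything under the Gnome Panel subtree
dconf dump /org/gnome/gnome-panel/
# watch the subtree while you rearrange the panel, to see which keys change
dconf watch /org/gnome/gnome-panel/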
| {
"pile_set_name": "StackExchange"
} |
Q:
Queue is empty even though there are items in it
I have a django app in which I'm using Queue.
class DataThread (threading.Thread):
data_queue = Queue.Queue()
In another thread I have stuff being added to the queue like below
DataThread.data_queue.put(msg)
And it is being consumed in DataThread in the below fashion
while True:
self.sendMessagesFromQueue()
time.sleep(1)
def sendMessagesFromQueue(self):
try:
while not DataThread.data_queue.empty():
data = self.data_queue.get()
#Some processing logic
except Exception as e:
print str(e)
Now this works when I run it using python manage.py runserver (using django itself).
But it doesn't work when I use gunicorn to run this app: the queue-empty check keeps saying it's empty, yet when I check the queue size at the place where I add to the queue, it keeps building up.
EDIT: I even checked if the queues are the same. They have the same memory locations. Also I'm not sure if this helps, but these threads are created before gunicorn starts. I'm making this in a sitecustomize.
A:
So it seems that gunicorn forks and creates multiple worker processes, unlike Django's development server, which only creates threads. Since all of the above setup is done before gunicorn forks, the forked workers copy over the address space but not the threads. Therefore, the Queue exists at the same virtual address in each process created by gunicorn, and data is put into each worker's own copy of the queue, while the consuming, i.e. Queue.get(), happens only in the initial process, where no Queue.put() takes place.
I fixed it by using multiprocessing.Queue instead. It worked like a charm.
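A minimal sketch of that change, reusing the names from the snippets above (the wiring into sitecustomize/gunicorn is unchanged; note that a multiprocessing.Queue is backed by a pipe, so it must be created before gunicorn forks its workers):
import multiprocessing
import threading
import time

class DataThread(threading.Thread):
    # created pre-fork, so every gunicorn worker inherits the same pipe-backed queue
    data_queue = multiprocessing.Queue()

    def run(self):
        while True:
            self.sendMessagesFromQueue()
            time.sleep(1)

    def sendMessagesFromQueue(self):
        try:
            # empty() is only a hint for multiprocessing queues, but it mirrors the original logic
            while not DataThread.data_queue.empty():
                data = DataThread.data_queue.get()
                # ... processing logic ...
        except Exception as e:
            print str(e)
Producers in the worker processes keep calling DataThread.data_queue.put(msg) exactly as before.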
| {
"pile_set_name": "StackExchange"
} |