Title: PhoneGap - Not able to play local sound file
Tags: android;cordova;android-mediaplayer
Question: I am working with PhoneGap with an Android device.
I am trying to play a local .wav file but am getting some errors.
I am using the example code from
http://docs.phonegap.com/en/1.0.0/phonegap_media_media.md.html
```document.addEventListener("deviceready", onDeviceReady, false);
// PhoneGap is ready
//
function onDeviceReady() {
playAudio("sounds/menu-change.wav");
}
// Audio player
var my_media = null;
// Play audio
function playAudio(src) {
// Create Media object from src
my_media = new Media(src, onSuccess, onError);
// Play audio
my_media.play();
}
function onSuccess() {
console.log("playAudio():Audio Success");
}
function onError(error) {
alert('code: ' + error.code + '\n' +
'message: ' + error.message + '\n');
}
```
I am getting the following error
```09-29 16:06:49.934: D/CordovaLog(14944): JSCallback Error: Request failed.
09-29 16:06:49.934: D/CordovaLog(14944): file:///android_asset/www/js/cordova-2.0.0.js: Line 3698 : JSCallback Error: Request failed.
09-29 16:06:49.934: I/Web Console(14944): JSCallback Error: Request failed. at file:///android_asset/www/js/cordova-2.0.0.js:3698
09-29 16:06:49.944: D/DroidGap(14944): onMessage(onPageStarted,file:///android_asset/www/settings.html)
09-29 16:06:49.964: I/AudioSystem(14944): getting audio flinger
09-29 16:06:49.974: I/AudioSystem(14944): returning new audio session id
09-29 16:06:50.014: E/MediaPlayer(14944): error (1, -438.238.0885)
09-29 16:06:50.014: W/System.err(14944): java.io.IOException: Prepare failed.: status=0x1
09-29 16:06:50.014: W/System.err(14944): at android.media.MediaPlayer.prepare(Native Method)
09-29 16:06:50.014: W/System.err(14944): at org.apache.cordova.AudioPlayer.loadAudioFile(AudioPlayer.java:544)
09-29 16:06:50.014: W/System.err(14944): at org.apache.cordova.AudioPlayer.readyPlayer(AudioPlayer.java:468)
09-29 16:06:50.014: W/System.err(14944): at org.apache.cordova.AudioPlayer.startPlaying(AudioPlayer.java:212)
09-29 16:06:50.014: W/System.err(14944): at org.apache.cordova.AudioHandler.startPlayingAudio(AudioHandler.java:232)
09-29 16:06:50.014: W/System.err(14944): at org.apache.cordova.AudioHandler.execute(AudioHandler.java:75)
09-29 16:06:50.014: W/System.err(14944): at org.apache.cordova.api.PluginManager$1.run(PluginManager.java:192)
09-29 16:06:50.014: W/System.err(14944): at java.lang.Thread.run(Thread.java:1027)
09-29 16:06:50.494: D/CordovaLog(14944): JSCallback: Message from Server: cordova.require('cordova/plugin/Media').onStatus('abf853b7-3e45-f906-fc59-92d9f11e9a14', 1, 0);
09-29 16:06:50.494: D/CordovaLog(14944): file:///android_asset/www/js/cordova-2.0.0.js: Line 3669 : JSCallback: Message from Server: cordova.require('cordova/plugin/Media').onStatus('abf853b7-3e45-f906-fc59-92d9f11e9a14', 1, 0);
09-29 16:06:50.504: I/Web Console(14944): JSCallback: Message from Server: cordova.require('cordova/plugin/Media').onStatus('abf853b7-3e45-f906-fc59-92d9f11e9a14', 1, 0); at file:///android_asset/www/js/cordova-2.0.0.js:3669
09-29 16:06:50.504: D/CordovaLog(14944): JSCallback Error: TypeError: Cannot read property 'statusCallback' of undefined
09-29 16:06:50.504: D/CordovaLog(14944): file:///android_asset/www/js/cordova-2.0.0.js: Line 3670 : JSCallback Error: TypeError: Cannot read property 'statusCallback' of undefined
09-29 16:06:50.504: I/Web Console(14944): JSCallback Error: TypeError: Cannot read property 'statusCallback' of undefined at file:///android_asset/www/js/cordova-2.0.0.js:3670
```
It is saying it requires the media plugin, but I have it enabled in my config.xml file
```<plugin name="Media" value="org.apache.cordova.AudioHandler"/>
```
Any help would be much appreciated.
Thanks
Here is another answer: When using a sound file located in your www folder you'll need to use the following file path format on Android:
```new Media("/android_asset/www/{directory}/{file-name}.mp3");```
|
Title: Post method returns code 302 Laravel but not When removed verifycsrftoken
Tags: php;amazon-web-services;laravel-5.2
Question: I have started a project in Laravel 5.2. I pushed the code to the server, but when I try to log in it redirects back to the login page with ```Status Code: 302 Found```. The same code works locally.
Comment: Please post your code, and check how to ask a question on Stack Overflow.
Comment: Thanks for reply. I have solved that issue. next time before asking will surely check and post code too.
|
Title: Tell libavcodec/ffmpeg to drop frame
Tags: c;ffmpeg;video-encoding;libavcodec
Question: I'm building an app in which I create a video.
Problem is, sometimes (well... most of the time) the frame acquisition process isn't quick enough.
What I'm currently doing is to skip the current frame acquisition if I'm late; however, FFMPEG/libavcodec considers every frame I pass to it as the next frame in line, so if I drop 1 out of 2 frames, a 20-second video will only last 10. More problems come in as soon as I add sound, since sound processing is way faster...
What I'd like would be to tell FFMPEG: "the last frame should last twice as long as originally intended", or anything that could allow me to process in real time.
I tried to stack the frames at a point, but this ends up killing all my memory (I also tried to 'stack' my frames on the hard drive, which was way too slow, as I expected).
I guess I'll have to work with the pts manually, but all my attempts have failed, and reading some other apps' code which use ffmpeg, such as VLC, wasn't of great help... so any advice would be much appreciated!
Thanks a lot in advance!
Here is another answer: your output will probably be considered variable framerate (vfr), but you can simply generate a timestamp using wallclock time when a frame arrives and apply it to your AVFrame before encoding it. then the frame will be displayed at the correct time on playback.
for an example of how to do this (at least the specifying your own timestamp part), see doc/examples/muxing.c in the ffmpeg distribution (line 491 in my current git pull):
```frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
```
here the author is incrementing the frame timestamp by 1 in the video codec's timebase rescaled to the video stream's timebase, but in your case you can simply rescale the number of seconds since you started capturing frames from an arbitrary timebase to your output video stream's timebase (as in the above example). for example, if your arbitrary timebase is 1/1000, and you receive a frame 0.25 seconds since you started capturing, then do this:
```AVRational my_timebase = {1, 1000};
frame->pts = av_rescale_q(250, my_timebase, avstream->time_base);
```
then encode the frame as usual.
Here is another answer: Many (most?) video formats don't permit leaving out frames. Instead, try reusing old video frames when you can't get a fresh one in time.
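A minimal sketch of that idea with the current libavcodec send/receive API (error handling omitted; ```last_frame```, ```next_pts``` and ```enc_ctx``` are assumed to be your own capture/encoder state):
```/* If a capture misses its deadline, re-submit the previous frame with the
 * next timestamp instead of skipping it entirely. */
AVFrame *dup = av_frame_clone(last_frame);   /* new reference to the same buffers */
if (dup) {
    dup->pts = next_pts++;                   /* advance pts in the encoder's timebase */
    avcodec_send_frame(enc_ctx, dup);        /* then drain packets as usual */
    av_frame_free(&dup);
}
```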
Comment for this answer: Most containers actually DO support leaving out frames. MP4 only stores a duration for each frame, which doesn't have to be the same for all frames. mkv stores presentation timestamps. IDK about avi; maybe not, but AVI isn't ideal for modern codecs anyway. (there aren't that many general-purpose video containers, not counting obscure ones like ogm, or REALLY obscure ones like nut.)
Here is another answer: There's this ffmpeg command line switch ```-threads ...``` for multicore processing, so you should be able to do something similar with the API (though I have no idea how). This might solve your problem.
Here is another answer: Just an idea.. when it's lagging with the processing have you tried to pass to it the same frame again (and drop the current one)? Maybe it can process the duplicated frame quickly.
Comment for this answer: Well it improves a bit, but the video is still way too fast...
I guess this comes from my scheduling algorithm (which I can't call an algorithm to be truly honest...)
I'll try to improve it tomorrow and post back if this improves something.
Thanks for the help anyhow (both you and duskwuff)!
|
Title: Conversion/flattening of Some(Array(Array(Array(String)))) to Array(String)
Tags: scala;user-defined-functions;scala-collections
Question: I have an ```Option[Array[Array[Array[String]]]]``` and I want to convert it to ```Array[String]```, or at least ```Some(Array[String])```.
I have tried the ```.flatten``` method.
I can print using ```.map(_.map(_.map(_.foreach(print))))```, but I want to store the printed values as a list.
Expectation: ```Array[String]``` or ```Some(Array(String))```.
Comment: In order to provide a [mcve] it's better to also add an example of a data set you are trying to convert.
Comment: Please format your code when asking your question.
Here is another answer: If you know the exact structure you can do:
For ```Option[Array[String]]```
```myArray.map(_.flatten.flatten)
```
For ```Array[String]```:
```myArray.toArray.flatten.flatten.flatten
```
Comment for this answer: I added also a version for `Array[String]`
Here is another answer: To convert ```Option[Array[Array[Array[String]]]]``` to ```Option[Array[String]]``` do this:
```.map(_.flatten.flatten)
```
To print the data inside the result, do this
```.foreach(_.foreach(println))
```
|
Title: Create a batch file to load a webpage when the date and time are correct
Tags: datetime;batch-file;webpage
Question: I am trying to get a webpage to load when the date and time are correct. The code I have come up with is:
```@echo off
:repeat
set CurrentTime=%time:~0,2%.%time:~3,2%
set CurrentDate=%date:/=-%
echo.
echo %CurrentDate% %CurrentTime%
echo.
IF %CurrentTime% == Tue 08-04-2014 13.04 goto load
timeout /t 1 >nul
goto repeat
:load
start/MAX iexplore.exe"" "http://www.youtube.com.au"
timeout /t 6 >nul
```
It will work if I remove the CurrentDate and the date from the IF statement but it won't if I don't. I do need the date and time to work.
Thanks.
Here is another answer: Additionally to the solution that Magoo has posted, note that ```%DATE%``` returns the current date using the short date format, which is fully and endlessly customizable by the user. One user may configure their system to return ```Tue 08/04/2014``` and another user may choose ```Apr8th14```. It's a complete nightmare for a BAT programmer, even on the same computer. This is also true for the ```~t``` modifier when expanding a variable containing a filename.
There are several solutions, google a bit to find them. The easiest in my opinion is to use WMIC
``` WMIC Path Win32_LocalTime Get Day,Hour,Minute,Month,Second,Year /Format:table
```
returns the date in a convenient way to directly parse with a ```FOR``` command.
``` FOR /F "skip=1 tokens=1-6" %%A IN ('WMIC Path Win32_LocalTime Get Day^,Hour^,Minute^,Month^,Second^,Year /Format:table') DO (
SET /A currentdate=%%F*10000+%%D*100+%%A
)
IF "%currentdate"=="20140408" goto load
```
Here is another answer: You can also use a scheduled task if that suits the reason for you wanting to load the page.
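For example, a one-shot task could be registered from the command line with ```schtasks``` (the task name here is made up, and ```/sd``` expects your locale's date format):
```schtasks /create /tn "OpenYouTube" /tr "cmd /c start http://www.youtube.com.au" /sc once /sd 08/04/2014 /st 13:04
```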
Here is another answer: ```IF "%date% %Time%"=="Tue 08-04-2014 13.04" goto load
```
should work for you. You need to use the "quotes" to group the string as a single entity, otherwise ```IF``` has no way of telling whether the ```==``` is a part of the string to be compared or the comparison operator (and that assumes that your date format is ```Tue 08-04-2014``` and time is ```13.04```).
|
Title: clean up the dataset
Tags: r
Question: I have this dataset:
sample:
```x=rnorm(45)
std_d=sd(x)
```
Now x looks like:
``` [1] -0.08059702 0.90403763 -0.18618130 -0.48590834 1.23714656 1.02248570
[7] -0.28970333 -0.19626563 0.89060697 0.87530362
```
Let p = abs(x[i] - x[i+1]). I want to put NA in place of values of x if p > sd(x). It should first check x[i] - x[i+1]; if this does not satisfy the condition, it should check the next i.
If the condition is satisfied, it should put NA for x[i+1].
Then next time p should be p = x[i] - x[i+2]. It should skip the NA value and keep the first term of p (x[i]) the same until the condition is no longer satisfied. Once this happens, the first term should become the term next to the NA value and the second term becomes the term next to the first term.
I think this can be done with a combination of if/else and a for loop, but I am not able to figure out the algorithm even after trying hard. I would appreciate help with this.
Thank you for your consideration.
Comment: ok. Lets consider this this:
This is what should happen:
1. abs(x[1]-x[2])>sd then x[2]=NA.
2. abs(x[1]-x[3])sd then x[5]=NA
5. abs(x[4]-x[5]<sd..and so on.
Comment: What have you tried? `p <- abs(diff(x, lag=1))` will get your values for p. I don't understand much of what happens after that... maybe you could use `set.seed` and walk through a few iterations of how you expect the calculation to go?
Here is the accepted answer: There has got to be a better way... but in horrible c style:
```x <- c(-0.08059702, 0.90403763, -0.18618130, -0.48590834, 1.23714656, 1.02248570, 0.28970333, -0.19626563, 0.89060697, 0.87530362)
std_d <- sd(x)
for(i in seq_along(x)) {
if(is.na(x[i])) next
ctr <- i
while(ctr < length(x)) {
if(abs(x[i] - x[ctr+1]) > std_d) {
x[ctr+1] <- NA
ctr <- ctr + 1
std_d <- sd(x, na.rm=TRUE)
} else {
break
}
}
}
```
If you're setting things to ```NA```, ```sd(x)``` is changing so I included that too...
Comment for this answer: Thanks. this is what I was looking for.
Here is another answer: ```is.na(x) <- c(FALSE, abs(diff(x)) > sd(x) )
#Pass two: Here your description could use a set.seed and a desired result.
> X1 <- x
> is.na(X1) <- c(FALSE, abs(diff(X1)) > sd(x) )
> X1
[1] NA -0.21797491 -1.02600445 -0.72889123 -0.62503927 NA NA 0.15337312
[9] NA NA 0.42646422 -0.29507148 NA 0.87813349 0.82158108 0.68864025
[17] -0.06191171 -0.30596266 -0.38047100 -0.69470698 -0.20791728 NA NA
[25] NA NA -0.40288484 -0.46665535 NA -0.08336907 0.25331851 -0.02854676
[33] -0.04287046 NA NA NA NA NA 0.12385424 0.21594157
[41] 0.37963948 NA -0.33320738 -1.01857538 -1.07179123
> X2 <- X1
> is.na(X2) <- c(FALSE, FALSE, abs(diff(X2, lag=2)) > sd(x) )
> X2
[1] NA -0.21797491 -1.02600445 -0.72889123 -0.62503927 NA NA 0.15337312
[9] NA NA 0.42646422 -0.29507148 NA NA 0.82158108 0.68864025
[17] -0.06191171 -0.30596266 -0.38047100 -0.69470698 -0.20791728 NA NA
[25] NA NA -0.40288484 -0.46665535 NA -0.08336907 0.25331851 -0.02854676
[33] -0.04287046 NA NA NA NA NA 0.12385424 0.21594157
[41] 0.37963948 NA -0.33320738 -1.01857538 -1.07179123
```
|
Title: I keep showing a "Hidden Network" in a rural area. I am not the one broadcasting it...that I know of. What is causing it?
Tags: networking;wireless-networking;wireless-router
Question: I have an old Netgear WND router with the 5GHz radio DISABLED and the 2.4GHz broadcasting its SSID, which shows up along with a "Hidden Network". I have ONE neighbor that is about 50 yards away and I see his NETGEAR99 router with low signal occasionally.
Here's the really weird part (and I realize how it sounds) - if I power off my Netgear router, the Hidden Network signal goes from full strength to a single bar.
Comment: Use WireShark to grab the MAC of the Hidden Network and compare it against the MACs from your router and any device capable of broadcasting an Ad Hoc or Hotspot network. If the MACs don't match any of the devices, it could be a myriad of things, including long-range WiFi broadcast via a commercial long-range antenna (range of several miles, and is one way mass events provide public WiFi coverage over a large area).
Comment: It seems reasonable that the device is somewhere on your property. If the channel number is the same as your main network, it may very well be a second network coming off the same router. There is a possibility you could use a deauth attack to find the SSID of the network, but a user would actually need to be connected to it. http://www.thelinuxgeek.com/content/find-hidden-ssids Do not do this on networks you do not own!
Comment: What happens when you try connecting to it? Do you have WiFi on your modem?
Comment: @HazardousGlitch - I have FTTH which plugs into my ancient Netgear WNDR3700v4 router. As far as I know, there is no WiFi functionality on the phone companies box mounted outside on my house. I am not 100% sure though. What baffles me is how powering down my router causes the signal strength to drop drastically. If the router has no power, but I still see the Hidden Net (even though only a single bar) it cant possibly be coming from the router right? What about clandestine devices? I wish I had a tool that could physically walk me toward the signal source with a direction needle.
Comment: @Andy Hello Andy, thanks for the link. This is something I thought of myself but lack the experience without ample written instruction. The other thought I had was my laptop itself. Perhaps software I installed, or maybe a virus or malware?
Here is another answer: The device might be a wireless repeater. Have you noticed any strange IPs or MAC addresses on your network? If the router is old enough and broadcasting its name at start-up, as they are prone to do, it may have been compromised. You are assuming that your neighbor only has one piece of equipment. What if they put in a repeater to get better access to your network?
The sensible advice is to stay sensible, but check your router and monitor it for strange connections. You can also use software to monitor how much bandwidth you're using, to see if there is a discrepancy between that and what your cable provider reports.
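For example, from any machine on the LAN you could take a quick inventory of the devices actually talking on your network and compare the MAC addresses against the hardware you own: ```arp -a``` lists devices this machine has recently exchanged traffic with, and ```nmap -sn``` ping-sweeps the subnet for live hosts (the subnet below is only an example, and nmap has to be installed separately).
```arp -a
nmap -sn 192.168.1.0/24
```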
|
Title: active admin many to many show index PG::UndefinedColumn: ERROR: column cases.product_id does not exist
Tags: ruby-on-rails;many-to-many;activeadmin
Question: In my ```Case``` model
``` has_many :case_products, dependent: :destroy
has_many :products, through: :case_products
```
in my ```Product``` model
```class Product < ActiveRecord::Base
has_many :cases
end
```
in ```CaseProduct```
```class CaseProduct < ActiveRecord::Base
belongs_to :case
belongs_to :product
end
```
how can I display how many cases I have for each product?
in ```active admin product.rb```
```ActiveAdmin.register Product do
permit_params :id, :name ,case_ids: []
index do
column :id
column :name
column "case" do |m|
m.cases.count
end
actions
end
show do
attributes_table do
row :id
row :name
row :case
end
end
end
```
I got this error
```PG::UndefinedColumn: ERROR: column cases.product_id does not exist
LINE 1: SELECT COUNT(*) FROM "cases" WHERE "cases"."product_id" = $1
^
: SELECT COUNT(*) FROM "cases" WHERE "cases"."product_id" = $1
```
Here is the accepted answer: You must set associations in Product like your Case model:
```class Product < ActiveRecord::Base
has_many :case_products, dependent: :destroy
has_many :cases, through: :case_products
end
```
If you only use ```has_many :cases```, Rails assumes Case model has a ```product_id``` column.
|
Title: Bootstrap 3 Panels not working in rails
Tags: ruby-on-rails-3;twitter-bootstrap;twitter-bootstrap-3;panel
Question: I want to create panels with Twitter Bootstrap in a Rails app.
```<div class="panel panel-default">
<div class="panel-heading">Panel heading without title</div>
<div class="panel-body">
Panel content
</div>
</div>
```
According to the official doc, this should render as a styled panel,
but I just get the plain text.
```Panel heading without title
Panel content
```
Hope someone can help me, thanks in advance!
Comment: I can't check right now, but it definitively worked some time ago. May you add exact version of the bootstrap which you are using. And try to update to the latest stable version (if you are using other one).
Comment: it works when I update to the latest version, thanks.
Comment: How are you including Bootstrap? Gem? CDN? Less? Sass?
Here is the accepted answer: Edit the Gemfile.
```# gem "twitter-bootstrap-rails"
gem 'twitter-bootstrap-rails', :git => 'git://github.com/seyhunak/twitter-bootstrap-rails.git' # install the latest version
```
then ```bundle install``` to install the latest version.
|
Title: How should I make multiple http requests in android
Tags: android
Question: Hey, I am trying to make multiple HTTP requests and am wondering the best way to perform this in Android. Currently I am using an IntentService with threads, however this doesn't work too well because onHandleIntent returns before the threading is complete. Should I switch to a regular service and start my own threads in there, or would AsyncTask be more appropriate?
Thanks
Comment: Are you starting from a service?
Comment: Currently starting from an activity and I am calling an IntentService
Here is the accepted answer: Any time you deal with threads you're going to have to deal with synchronization. For example, when onHandleIntent() is called you may need to synchronize with your HTTP request thread(s) and wait for it to complete.
If you know ahead of time you will be doing this a lot it may be worth looking at something like ThreadPoolExecutor to save a bit on the pains of creating and tearing down threads. Again you will still need to synchronize your threads if you want to wait for their termination before moving on. Android has many ways of doing this including abstractions that make it fairly trivial to implement.
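As a rough sketch of that approach (pool size, ```urls```, ```fetch()``` and ```insertIntoDatabase()``` are placeholders for your own HTTP and SQLite code; needs ```java.util.*``` and ```java.util.concurrent.*```):
```ExecutorService pool = Executors.newFixedThreadPool(6);
List<Future<String>> pending = new ArrayList<Future<String>>();
for (final String url : urls) {
    pending.add(pool.submit(new Callable<String>() {
        @Override
        public String call() throws Exception {
            return fetch(url);               // your existing HTTP request + parsing
        }
    }));
}
for (Future<String> result : pending) {
    insertIntoDatabase(result.get());        // get() blocks until that request finishes
}
pool.shutdown();
```
That way the work only completes, and onHandleIntent() only returns, once every request has finished and been written to the database.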
Comment for this answer: I am Starting 6 threads everytime the intentService is called. Each thread is responsible for making an http request with different xml to get different data back. Then I am taking the data and inserting it into a local sqlite database. My code looks something like the folowing:
http://pastebin.com/fGqXeEbS
Comment for this answer: I suggest you take a look at ThreadPoolExecutor (https://developer.android.com/reference/java/util/concurrent/ThreadPoolExecutor.html) if you will be doing this consistently. Either way this sounds better suited for an actual service.
Here is another answer: If you're already starting new threads anyway (whether directly or indirectly), then yes you should use a ```Service``` instead of an ```IntentService```. The whole point of an ```IntentService``` is for it to handle threading for you.
|
Title: Google Cloud PubSub giving - A pull requests for subscription went to a server that is temporarily overloaded. Please try the request again
Tags: node.js;google-cloud-pubsub;gcloud-node
Question: I have 16 workers subscribed to a topic and they are using pull mechanism to retrieve the messages from the queue.
I very frequently get ```Error: A pull requests for subscription '/subscriptions/quizizz-org/socket-worker' went to a server that is temporarily overloaded. Please try the request again.``` as reply.
My first guess was that it may be because of some quota limit, but I could not find any quota limit for pull requests sent to a subscription.
Here is the accepted answer: The error that says the requests "went to a server that is temporarily overloaded" is not a message that indicates exceeding any quota limits. In this case, the issue is an isolated problem for which a fix should be rolled out by end of day 2/3/2017.
|
Title: DropKick.js Menu - Always open when page loads
Tags: javascript;html;css
Question: I have a problem with my DropKick menu on my responsive website. When the site is viewed at iPhone size it switches to this DropKick menu (a dropdown) using dropkick.js.
My HTML:
```<div id="mobilemenu">
<a href="#" id="pull">Menu</a>
</div>
```
This above code is only visible if you view the site in 320px width.
My Javascript:
```<script type="text/javascript">
$(function () {
var pull1 = $('#pull');
menu1 = $('ul.menuresponsive');
menuHeight = menu1.height();
$(pull1).on('click', function (e) {
e.preventDefault();
menu1.slideToggle();
});
$(window).resize(function () {
var w = $(window).width();
if (w > 320 && menu.is(':hidden')) {
menu1.removeAttr('style');
}
});
});
</script>
```
I don't really know much about Javascript, this was taken from a tutorial.
My CSS for when the site is in 320px:
```/* Menu */
#mobilemenu { display:block !important; margin-bottom:20px; }
#mobilemenu ul { margin:12px 0 0 0 !important; list-style:none; padding:0 10px 0 10px }
#mobilemenu ul li { float:none !important; font-size:16px; padding:5px 0 5px; font-weight:bold; border-bottom:1px solid #000; }
#mobilemenu ul li a { color:#333; text-decoration:none; }
/* Drop */
#mobilemenu ul li ul li { font-size:14px; font-weight:normal; border:none; color:#000; }
/* Pull */
#pull { display:block !important; text-align:center; color:#fff; text-decoration:none; padding:10px 0 10px 0; font-size:16px; font-weight:bold; }
#menu { display:none; }
```
As it looks now, the menu is constantly open as shown below, I would very much like it to be closed by default but I can't seem to find a solution.
My menu is rendered as a ```<ul>``` and ```<li>``` dynamically inside the ```<div id="mobilemenu">```
Here is the accepted answer: In the tutorial you link to the demo example has two media queries - one for 515px and one for 320px.
The css for the 515px will be inherited by the 320px one and it contains the code you need to close the menu I think.
Try adding this code into your media query:
```nav {
border-bottom: 0;
}
nav ul {
display: none;
height: auto;
}
nav a#pull {
display: block;
background-color: #283744;
width: 100%;
position: relative;
}
nav a#pull:after {
content:"";
background: url('nav-icon.png') no-repeat;
width: 30px;
height: 30px;
display: inline-block;
position: absolute;
right: 15px;
top: 10px;
}
```
Comment for this answer: Can you replicate the issue on jsfiddle.net? In the code I posted above it's referencing a 'nav' element. Do you have that element on your page? You're also using a lot on !important commands in your CSS which may cause issues.
Comment for this answer: I've recreated yours here using a mixture of the tutorial code and your code. Hope this helps: http://jsfiddle.net/cHNWe/2/
Comment for this answer: No worries, glad I could help.
Comment for this answer: I guess I wasn't paying enough attention to the tutorial, for that I apologize. But thank you for helping me out here - and for going through the tutorial to find the missing part I forgot!
---- EDIT ----
Apparently, the menu is still open when you load the site. It looks better - but it's still open.
Comment for this answer: That's amazing Billy! It works like a charm. Thank you very much for providing this fix :)!
|
Title: How to pass object to StatefulWidget while Routing in Flutter?
Tags: flutter
Question: I have a MyApp class which is responsible for loading the application. I'm loading a class HomeWidget as home in MyApp.
```void main() => runApp(new myApp());
class myApp extends StatelessWidget{
@override
Widget build(BuildContext context) {
return new MaterialApp(
title: 'My App',
color: Colors.grey,
home: new HomeWidget(),
theme: new ThemeData(
primarySwatch: Colors.lightBlue,
),
routes: <String, WidgetBuilder>{
"/ItemDetailWidget": (BuildContext context) => new MovieDetailsPage(),
},
);}}
```
HomeWidget contains - Header, ListView, BottomNavigation.
When user taps particular item from ListView, I wanted to show new Widget/Page(ItemDetailWidget) which has all information about that particular item.
So I created ItemDetailWidget, which is stateful and accepts one parameter of type MyModel (an object). I made it stateful on purpose.
How should I add ItemDetailWidget into routes, given that I'm passing a parameter to it?
I tried using
``` "/ItemDetailWidget": (BuildContext context) => new ItemDetailWidget(),
```
However, it throws the error "The Constructor return type dynamic that isn't of expected type widget".
Also, how can I pass the MyModel object to ItemDetailWidget using the Navigator syntax? I have an onTap() function in the ListView:
```Navigator.of(context).pushNamed('/widget1');```
Here is the accepted answer: This depends on the data you're sending; it sounds like in your case you have a bunch of movie details in a DB (or something), and you want to be able to show details for that movie. What you can do is use a unique identifier for each movie, and put that in the request; this is described more or less in the potential duplicate mentioned in the comments. The flutter stocks example also explains this.
To summarize:
When you push, do a ```pushNamed("moviedetails/${movieUniqueIdentifier}")```.
In your MaterialApp, you can set
routes:
```routes: <String, WidgetBuilder>{
'/': (BuildContext context) => new Movie(movies, _configuration),
'/settings': (BuildContext context) => new MovieSettings(_configuration)
},
```
and:
```onGenerateRoute: (routeSettings) {
  if (routeSettings.name.startsWith("movie:")) {
// parse out movie, get data, etc
}
}
```
However, this isn't always the easiest way of doing things - say for example your database takes a while to respond and so you want to do a query before and then pass in the result (caching would be a good answer to this but let's ignore that for now =D). I'd still recommend the first method, but there are cases where it doesn't work.
To instead pass an object directly in (as the question actually asks) you can use:
```Navigator.of(context).push(new PageRouteBuilder(pageBuilder:
(context, animation, secondaryAnimation) {
// directly construct your widget here with whatever object you want
// to pass in.
}));
```
Note that this will get quite messy if you have these ```Navigator.of(context).push``` blocks all over your code; I got around this by using static convenience methods such as ```MyNavigator.pushMovie(context, movieObject)```, which would call ```Navigator.of(context)....``` under the hood. (I also subclass Navigator and do MyNavigator.of(context) instead, but my setup is complicated as it does a bunch of additional custom navigation stuff.)
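A minimal sketch of such a wrapper, reusing the ```ItemDetailWidget```/```MyModel``` names from the question (purely illustrative):
```class MyNavigator {
  static void pushItemDetail(BuildContext context, MyModel item) {
    Navigator.of(context).push(new PageRouteBuilder(
      pageBuilder: (context, animation, secondaryAnimation) =>
          new ItemDetailWidget(item), // hand the object straight to the widget
    ));
  }
}
```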
|
Title: while read loop ignoring last line in file
Tags: bash
Question: Reading in a file , ```members_08_14.csv``` which just contains a list of numbers, the while loop is reading each line. For each line, the number is matched against a regex to ensure that it's only numbers and exactly 11 characters long.
```while read card
do
if [[ $card =~ ^[0-9]{11}$ ]]
then
echo "some sql statement with $card" >> temp.sql;
else
echo "Invalid card number in file: $card";
fi
done <registered/members_08_14.csv
```
The interesting thing is, the else is not being executed if the regex does not match. I would expect that either the line would be written to ```temp.sql```, or a line would be printed to ```stdout``` saying the card number is invalid.
The behaviour, however, is more along the lines of either only the ```true``` condition or only the ```false``` condition gets activated for the entire file. Why would this be?
Here's the contents of registered/members_08_14.csv:
```47678009583
47678009585
47678009587
47678009590
476780095905
```
The first lines are valid, the 5th line is invalid.
Output of ```cat -vte registered/members_08_14.csv```
```47678009583$
47678009585$
47678009587$
47678009590$
476780095905$
```
Comment: I have no idea, but it works great on my computer.
Comment: Show output of `cat -vte registered/members_08_14.csv` in your question.
Comment: When I run that code, I get `Invalid card number in file: 476780095905`
Comment: Maybe the last line of your file is missing a trailing newline? If that's the case, `read` will fail and the entire loop iteration won't run.
Comment: What does `set -x` give you for the iteration with the line in question?
Comment: I took the liberty of updating the summary to better reflect the actual problem, in the hope that this helps other folks in the same boat.
Comment: @CharlesDuffy I believe you were correct with a trailing newline. When I added the newline to the end of the file (and then removed it again) it seemed to work. The file did originally come from a Windows machine.
Comment: Well, it works perfectly fine on my bash 4.3-1 on Debian unstable...
Here is the accepted answer: If the last line of your file has no newline on the end, ```read``` will put its content into ```card``` -- but will then exit with a nonzero value. Because ```read``` has exited with a nonzero value in this case, the ```while``` loop will exit without going on to run the loop body for that final line.
The easiest fix is to correct the file.
Another approach you can take is to ignore the exit status of ```read``` when it actually populates its destination (and, while at it, to put ```$'\r'``` into IFS, such that ```read``` will ignore the extra characters in DOS newlines):
```while card=; IFS=$' \t\r\n' read -r card || [[ $card ]]; do
if [[ $card =~ ^[0-9]{11}$ ]]
then
echo "some sql statement with $card" >> temp.sql;
else
echo "Invalid card number in file: $card";
fi
done <registered/members_08_14.csv
```
Comment for this answer: Thanks for posting the answer :)
Here is another answer: Perhaps your file is in DOS format, so you are also reading carriage returns (```\r```) into the end of the variable. Try running ```dos2unix file``` or ```sed -i 's|\r||' file```. Another way is to trim out that character on every read like this:
```while IFS=$' \t\r\n' read -r card
```
Comment for this answer: @Hypino: But `cat -vte` isn't showing presence of `^M` in each line.
Comment for this answer: If the file were in DOS format, wouldn't _every_ line go through the `else` case?
Comment for this answer: I believe this was it. The file originally came from Windows, and when I added a new line to the end of the file (and then removed it again) it seemed to have worked.
Comment for this answer: @anubhava Sorry for the confusion, the `cat -vte` was from after I made a change as per @CharlesDuffy
Comment for this answer: @CharlesDuffy I'm just guessing. He was not specific which parts of the lines were getting the proper condition or not.
Here is another answer: To read all the lines, regardless of whether they are ended with a new line or not:
```cat "somefile" | { cat ; echo ; } | while read line; do echo $line; done
```
Source : My open source project https://sourceforge.net/projects/command-output-to-html-table/
Comment for this answer: You could of course use `{ cat "somefile"; echo; } | ...` and make the pipeline that much shorter. Or `cat "somefile" <(echo) | ...`
|
Title: TVOS about App layout (Like in iTunes and similar)
Tags: ios;swift;tvos
Question: I have played a lot with my app and I do not understand how to make a layout like in, for example, iTunes (many other apps use it too).
How is it made? Is it one big ```CollectionView``` with a special ```Flow``` layout, or a ```TableView``` containing many ```CollectionView```s?
Collection headers: in the iTunes app, if I select an item (with adjustImageWhenFocused) under the header, the header jumps up and the item does not overlap it. Is that special magic, or is it system behavior that I just don't know how to use?
Below are two screenshots of what I am trying to describe, and an example from my app.
In iTunes there is a movie preview page. What type of ```View``` is it made with: ```TableView```, ```CollectionView```, or just a ```ViewController``` with a ```ScrollView```?
I have read many sources and looked at demo projects, but nowhere have I found answers to these questions.
Here is the accepted answer: 1) I think it would be a ```stackTemplate``` containing a couple ```collectionList```s.
2) AFAIK the headers "jump up" on their own, no need to prepare anything special.
3) ```productTemplate```?
For examples, see https://github.com/iBaa/PlexConnectApp, ```/TVMLTemplates/Default/Movie_OnDeck.xml``` (1) or ```Movie_PrePlay.xml``` (3).
Or check the gold source: https://developer.apple.com/library/tvos/documentation/LanguagesUtilities/Conceptual/ATV_Template_Guide/StackTemplate.html, plus other Templates.
Comment for this answer: how to add this feature using swift?
Comment for this answer: You want to say that I just need to use TVML instead of writing the app in Swift?
Comment for this answer: Yeah, well, it's one way to re-use all those Apple predefined views - sorry that I missed, that you are looking for a "native" Swift way...
Here is another answer: If you want to use the native Swift way, it can be achieved as follows:
You can use a table view and have a collection view within each cell. I am using the same approach; see the sketch after this list.
There is a focus update delegate; from it you can find the focused frame of the image view. With the focused frame and the label frame you can check whether they intersect, and based on that move the label up or down.
That is a native TVML template; to achieve it in Swift you need to create the view using a table view and collection view.
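Here is a very rough sketch of the first point (the class name and reuse identifier are made up, and layout/constraint code is omitted), just to show the structure of a table view cell that hosts its own collection view:
```class ShelfCell: UITableViewCell, UICollectionViewDataSource {
    let collectionView: UICollectionView = {
        let layout = UICollectionViewFlowLayout()
        layout.scrollDirection = .horizontal          // one horizontal "shelf" per table row
        return UICollectionView(frame: .zero, collectionViewLayout: layout)
    }()
    var items: [String] = []                          // whatever model this row displays

    override init(style: UITableViewCell.CellStyle, reuseIdentifier: String?) {
        super.init(style: style, reuseIdentifier: reuseIdentifier)
        collectionView.dataSource = self
        collectionView.register(UICollectionViewCell.self, forCellWithReuseIdentifier: "item")
        collectionView.frame = contentView.bounds
        contentView.addSubview(collectionView)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        return items.count
    }

    func collectionView(_ collectionView: UICollectionView,
                        cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        return collectionView.dequeueReusableCell(withReuseIdentifier: "item", for: indexPath)
    }
}
```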
|
Title: USB Wireless adapter driver, tplink T2uhp ac600 (mt7610?)
Tags: networking;drivers;wireless;20.04;mediatek
Question: Linux noob here. Fresh installed Ubuntu 20.04. Trying to get my wireless adapter to work (https://www.tp-link.com/us/support/download/archer-t2uhp/#Driver).
I tried tons of suggestions found online but none worked. I'm sorry I didn't document them all, but a few I did:
One of the last things I found was https://www.myria.de/computer/1308-tp-link-archer-t2u-ac600-unter-linux-nutzen , which, if Google Translate doesn't let me down, says the driver should be included in kernel 5.0, although I'm not sure how that works.
I also found "Problems configuring my usb ac051 wifi adapter", but after running "sudo dkms install mt7610u/1" (in the last part of the answer) I get:
```Building module:
cleaning build area...
'make' all KVER=5.8.0-38-generic...(bad exit status: 2)
ERROR (dkms apport): binary package for mt7610u: 1 not found
Error! Bad return status for module build on kernel: 5.8.0-38-generic (x86_64)
Consult /var/lib/dkms/mt7610u/1/build/make.log for more information.
```
lsusb
```Bus 002 Device 003: ID 0781:5591 SanDisk Corp. Ultra Flair
Bus 002 Device 006: ID 2357:010b TP-Link
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 046d:c534 Logitech, Inc. Unifying Receiver
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
```
thanks
After running the following, as suggested by Jeremy:
sudo sed -i 's/0105/010b/' /lib/modules/$(uname -r)/kernel/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0u.ko
sudo depmod -a
```seba@Loldserv:~/src$ sudo sed -i 's/0105/010b/' /lib/modules/$(uname -r)/kernel/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0u.ko
seba@Loldserv:~/src$ sudo depmod -a
```
there was no feedback in the console and the last command seems to be waiting for more input. Anyway, I tried the steps in the other post I cited again but keep getting the error above.
I also checked make.log (content below); not sure if I should run what it says there, or where to run it.
```DKMS make.log for mt7610u-1 for kernel 5.8.0-38-generic (x86_64)
dom 17 ene 2021 19:43:30 -03
make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/5.8.0-38-generic/build M=/var/lib/dkms/mt7610u/1/build modules
make[1]: Entering directory '/usr/src/linux-headers-5.8.0-38-generic'
ERROR: Kernel configuration is invalid.
include/generated/autoconf.h or include/config/auto.conf are missing.
Run 'make oldconfig && make prepare' on kernel src to fix it.
make[1]: *** [Makefile:746: include/config/auto.conf] Error 1
make[1]: Leaving directory '/usr/src/linux-headers-5.8.0-38-generic'
make: *** [Makefile:374: modules] Error 2
```
Here is the accepted answer: A quick hack of a fix is
```sudo sed -i 's/0105/010b/' /lib/modules/$(uname -r)/kernel/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0u.ko
sudo depmod -a
```
Reboot
This will replace an existing entry in the module alias file
For now do ```sudo apt install --reinstall linux-modules-extra-$(uname -r)```
Reboot
then try ```sudo modprobe mt76x0u``` then do ```echo 2357 010b | sudo tee /sys/bus/usb/drivers/mt76x0u/new_id```
See if it works
Lets try a udev rule
```sudo touch /etc/udev/rules.d/98-wifi.rules
gedit admin:///etc/udev/rules.d/98-wifi.rules
```
Paste this into the file
```ACTION=="add", ATTRS{idVendor}=="2357", ATTRS{idProduct}=="010b", RUN+="/sbin/modprobe mt76x0u" RUN+="/bin/sh -c 'echo 2357 010b > /sys/bus/usb/drivers/mt76x0u/new_id'"
```
Reboot and see if it works
Comment for this answer: That is good, that means there weren't any errors
Comment for this answer: Ok, I can work on a better fix
Comment for this answer: I don't have one of those devices, so I can't be sure why 5GHz doesn't work. You might want to look at https://ubuntuforums.org/showthread.php?t=2354328&p=13614520&#post13614520 there is a bit there about setting your country code and that may help. Please file a bug report against package linux, see https://help.ubuntu.com/community/ReportingBugs as otherwise those commands will have to be run after every boot unless a udev rule is made or this is fixed upstream in the kernel
Comment for this answer: ty for answering. maybe im missing something. this commands didn't give me any feedback, goinna edit my question with my response.
Comment for this answer: oh i see, but it still not working sry,
Comment for this answer: First of all, thank you so much, the adapter now works and i can connect. The only issue is it wont connect to 5ghz networks, that's why i took so long to reply, was trying to fix that issue by myself (to no avail sadly). I don't want to burden you anymore, so my only question is if it's a known issue with the driver, or i should be able to solve it. Thank you once again
Comment for this answer: No problem, already did the country thing, gonna check that link. That explains why it didn't work after rebooting lol. i will also check the bug report. Ty one again for all your help.
Comment for this answer: i'm sorry i wasn't expecting you would also provide me with the udev rule so i didn't check. I just tried and it worked perfectly. Dude you are really awesome, if there is some way i can show you my gratitude or help you back let me know (maybe start ko-fi page :P)
|
Title: ClearCase Checkin trigger doesn't allow deliver
Tags: version-control;clearcase;clearcase-ucm
Question: I created a preop checkin trigger that checks the comment to make sure it isn't empty. This works just fine.
However, when I do a deliver from the dev stream to the int stream, the trigger stops at the check in process. Is there a way around this? I am assuming that the comments while checking in for a deliver process are blank.
Here is the accepted answer: You could setup a preop trigger on the ```deliver_start``` operation kind (opkind) in order to set an environment variable which would act as a flag.
When that environment variable is set, your original script (the preop checkin one) could simply return true (ie does nothing and allows the checkin to proceed)
Another postop trigger on ```deliver_cancel``` and ```deliver_complete``` opkinds will cancel that environment variable.
See an example of pre and postop trigger on deliver events here.
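For illustration only (the flag name ```CLEARCASE_DELIVER_FLAG``` and the way it gets set are assumptions, not standard ClearCase behaviour; ```CLEARCASE_COMMENT``` is the standard trigger environment variable), the preop checkin script could then look roughly like this in Perl:
```# Preop checkin trigger: skip the comment check while a deliver is in progress.
exit 0 if $ENV{'CLEARCASE_DELIVER_FLAG'};       # flag set by the deliver_start preop trigger

my $comment = $ENV{'CLEARCASE_COMMENT'} || '';  # comment supplied for this checkin
if ($comment =~ /^\s*$/) {
    print STDERR "Checkin rejected: a non-empty comment is required.\n";
    exit 1;                                     # non-zero exit blocks the checkin
}
exit 0;
```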
|
Title: python code problem
Tags: python;google-app-engine
Question: i have this code:
```class Check(webapp.RequestHandler):
def get(self):
user = users.get_current_user()
be = "SELECT * FROM Benutzer ORDER BY date "
c = db.GqlQuery(be)
for x in c:
if x.benutzer == user:
s=1
break
else:
s=2
if s is 0:
self.redirect('/')
```
to check whether the user is registered or not.
but it gives me an error:
```Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 511, in __call__
handler.get(*groups)
File "/Users/zainab_alhaidary/Desktop/الحمد لله/check.py", line 23, in get
if s is 0:
UnboundLocalError: local variable 's' referenced before assignment
```
what should i do???
Comment: Why don't you let sql filter for the user you're checking?
Comment: don't use `is` to compare `ints`. use `==`
Comment: Why are you fetching all users in the table if you only need information about one specific user (the current one)? And your variable naming scheme has some room for improvement, good names are important.
Here is another answer: You want to set ```s``` to 0 before the ```for``` loop starts. If the query returns zero items, your ```for``` loop doesn't loop even once, so ```s``` is undefined.
Also, you should use ```if s == 0:``` instead of ```if s is 0:```. In CPython, they are both equivalent, but you shouldn't rely on the fact. See: the documentation for PyInt_FromLong and "is" operator behaves unexpectedly with integers.
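For example (whether two equal ints are the same object is a CPython implementation detail, so the exact results can vary):
```a = 256
b = 256
print(a is b)    # True in CPython only because small ints are cached
c = int('100000')
d = int('100000')
print(c is d)    # typically False: two distinct int objects with equal values
print(c == d)    # True: == compares values, which is what you want here
```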
Comment for this answer: Why shouldn't he rely on that fact?
Comment for this answer: I understand what you mean now, I've spent 30 minutes trying to find the docs on "is".
Comment for this answer: @wizard, because it's an implementation detail of one implementation of python. there is jython, pypy, ironpython, unladenswallow... any of those might decide not to cache the small ints. besides the code should **say what you mean**
Comment for this answer: `is` tests that the variables refer to the same object in memory, `==` tests that their values are equal. CPython caches small `int`s (<265 I think) for optimisation, so all references to them point to the same place in memory. This is not true of large `int`s, or guaranteed by the specification.
Here is another answer: You're using 's' before you assign something to it. Add an 's = 0' in the appropriate location.
Here is another answer: Why exactly are you loading all users, then looping through them, just to find one? Use a where clause:
```be = "SELECT * FROM Benutzer WHERE benutzer=:1"
c = db.GqlQuery(be, user)
user_from_db = c.get()
if user_from_db is not None: # found someone
dostuff()
else:
self.redirect('/')
```
Comment for this answer: Wooble: Ok, I've never used AppEngine so I didn't know.
Comment for this answer: I'd vote you up twice if I could, this wasn't his question but it should be his answer.
Here is another answer: Your problem is that if ```c``` is an empty list then the code in the ```for``` loop is never run and ```s``` never gets set, hence the error:
```UnboundLocalError: local variable 's' referenced before assignment
```
What the error is telling you that you're referencing - i.e. using - ```s``` before it has any value - i.e. before a value has been assigned to it.
To fix this you just ensure ```s``` always is assigned a value:
```s = 0
for x in c:
if x.benutzer == user:
s = 1
break
else:
s = 2
```
Comment for this answer: @gnibbler - you're right; shouldn't have suggested something I don't use myself. Have changed to suggested solution.
Comment for this answer: the `else` clause of the `for` loop gets executed if the for loop isn't terminated by a `break`, so this won't work
Here is another answer: In the case that ```c``` is empty the ```if``` statement in the loop never gets executed
you should set ```s=0``` before the ```for``` loop
Here is another answer: I don't know why you are doing this, but if I understand your code correctly, you have ```s=1``` when ```x.benutzer == user```, and ```s=2``` otherwise (shouldn't this be ```s=0``` if you are going to check against 0?).
```for x in c:
if x.benutzer == user:
s=1
break
else:
s=2
if s is 0:
self.redirect('/')
```
Anyway, here's my solution:
```if not any(x.benutzer == user for x in c):
self.redirect('/')
```
Here is another answer: Define ```s``` before to assign it a value (also, change the test on ```s```):
```user = users.get_current_user()
be = "SELECT * FROM Benutzer ORDER BY date "
c = db.GqlQuery(be)
s=0 # <- init s here
for x in c:
if x.benutzer == user:
s=1
break
else:
s=2
if s == 0: # <- change test on s
self.redirect('/')
```
Comment for this answer: How do I correctly format the code block (with colour, ...) ?
Comment for this answer: Don't use ``, just indent 4 space. The syntax highlighting is automatic.
Comment for this answer: Indent it by 4 spaces, or mark it and use the code button to do it.
|
Title: converting data string to time using Linq
Tags: c#;asp.net;linq
Question: How can I convert the following into times, knowing that the values are the number of minutes.
350-659, 1640-2119, 2880-3479;
```The output I'd like is
M 5:50am - 10:59am
T 3:20am - 10:59am
W 12:00am - 9:59am
etc....
Ranges -
Mon= 0-1439
Tue = 1440-2879
Wed = 2880 - 4319
Thurs = 4321 - 5759
Fri = 5760 - 7199
Sat = 7200 - 8639
Sun = 8640 - 10079
```
What I have so far is
```var days = new[] { 24, 48, 72, 96, 120, 144, 168 };
var numbers = Enumerable.Range(1,7);
var hours = days.ToDictionary(x => (double)x/24, i => (int)i*60);
which outputs
Key Value
1 1440
2 2880
3 4320
4 5760
5 7200
6 8640
7 10080
```
Comment: *convert the following into times* - times *when* ? There isn't really any such thing as a Time in C# without a date that comes with it. OK, so 540 is "9am on Monday", but *what week of what year* do you want it to be?
Comment: Why the imposition of "using LINQ"? LINQ is for querying things, and doesn't *really* make sense in this context, which is "parse a string to a list of ints and interpret them as the number of minutes since midnight on monday". Is this an academic exercise where you *must* use LINQ in some way, however tenuous? Are you aware that 1019 minutes since midnight is 4:59 pm and not 5pm? Are we supposed to factor for this in an answer?
Comment: Work backwards, Write a function that does the parsing task you want. Then write a function that enumerates the tokens from the source input. Then write code that passes the tokens to the parsing function. Then examine how to convert that code, most likely a foreach loop, into a linq projection. In other words take the task apart into steps, and work each step individually.
Comment: What do you want the output to look like? Maybe give an example of a couple of input values and the expected output values. For example: "0065 should give an output of Monday 1:05 AM"
Comment: I guess somebody, some time ago, thought it would be a good idea to do this. I think you would also need to know what day of the week it is.
Comment: Also, your mappings are incorrect. 1440 is Tuesday at 00:00, and same for the rest.
Comment: One letter for day won't cut it. There are two Ts and two Ss. It seems as though you are coming up with the requirements as you go.
Here is the accepted answer: I kinda don't get the question at all, but taking everything you've said at face value:
```var times = "350-659, 1640-2119, 2880-3479;"
.Split(',') //split to string pairs like "350-659"
.Select(s => s.Split('-').Select(x => int.Parse(x)).ToArray()) //split stringpairs to two strings like "350" and "659", then parse to ints and store as an array
.Select(sa => new { //turn array ints into dates
F = new DateTime(0).AddMinutes(sa[0]), //date 0 i.e. jan 1 0001 was a monday. add minutes to it to get a time and day
T = new DateTime(0).AddMinutes(sa[1] + 1) //add 1 to the end minute otherwise 659 is 10:59am and you want 11:00am
}
)
.Select(t =>
$"{($"{t.F:ddd}"[0])} {t.F:hh':'mmtt} - {t.T:hh':'mmtt}" //format the date to a day name and pull the first character, plus format the dates to hh:mmtt format (eg 09:00AM)
);
Console.Write(string.Join("\r\n", times));
```
If you actually want to work with these things in a sensible way I recommend you stop sooner than the final Select, which stringifies them, and work with the anonymous type ```t``` that contains a pair of datetimes
The only thing about this output that doesn't match the spec, is that the AM/PM are uppercase. If that bothers you, consider:
```$"{t.F:ddd}"[0] + ($" {t.F:hh':'mmtt} - {t.T:hh':'mmtt}").ToLower()
```
Comment for this answer: That's not a linq thing, it's string interpolation. `var x = "Hello"; var y = $"{x} world";` is a neater way of writing `var x = "Hello"; var y = string.Format("{0} world", x);` though you can't reuse parameters like in string.Format; if you want something in an interpolated string twice you have to write it twice so eg `var y = $"{(i < 1 ? "hello":"goodbye")} {(i < 1 ? "hello":"goodbye")} world"` might not be as neat in your opinion as `var y = string.Format("{0} {0} world", (i < 1 ? "hello":"goodbye")`.
Comment for this answer: Probably the neatest thing to do if interpolating is to make variables: `var helloOrGoodbye = (i < 1 ? "hello":"goodbye"); var y = $"{helloOrGoodbye} {helloOrGoodbye} world"`. Just like with string.Format, a placeholder can accept a format string after a colon, so `var i = 255; var y = $"Hex is: {i:X}"` will format 255 as a hex string, same as calling `i.ToString("X")` or `string.Format("Hex is: {0:X}", i);` would
Comment for this answer: I added some comments to explain the linq
Comment for this answer: Thank you, if you don't mind me asking, what kind of expression is it that you used in the select statement with the $, I'm still new to LINQ
Here is another answer: An interval of time (as opposed to an absolute point in time) is expressed as a TimeSpan. In this case, you'd have one TimeSpan that represents the offset (from the beginning of the week) until the starting time, then another TimeSpan that represents the offset to the end time.
Here's how to convert your string into a series of TimeSpans.
```var input = @"540-1019;1980-2459;3420-3899;4860-5339;6300-6779";
var times = input
.Split(';')
.Select(item => item.Split('-'))
.Select(pair => new
{
StartTime = new TimeSpan(hours: 0, minutes: int.Parse(pair[0]), seconds: 0),
EndTime = new TimeSpan(hours: 0, minutes: int.Parse(pair[1]), seconds: 0)
})
.ToList();
foreach (var time in times)
{
Console.WriteLine
(
@"Day: {0} Start: {1:h\:mm} End: {2:h\:mm}",
time.StartTime.Days,
time.StartTime,
time.EndTime
);
}
```
Output:
```Day: 0 Start: 9:00 End: 16:59
Day: 1 Start: 9:00 End: 16:59
Day: 2 Start: 9:00 End: 16:59
Day: 3 Start: 9:00 End: 16:59
Day: 4 Start: 9:00 End: 16:59
```
You can of course choose to format the TimeSpan in any way you want using the appropriate format string.
Here is another answer: You could get the current monday of the week and add the minutes. I don't think you can do this in Linq directly.
```//your timestamp
int minutes = 2345;
//get the day of week (sunday = 0)
int weekday = (int)DateTime.Now.DayOfWeek - 1;
if (DateTime.Now.DayOfWeek == DayOfWeek.Sunday)
weekday = 6;
//get the first day of this week
DateTime firstDayOfWeek = DateTime.Now.AddDays(-1 * weekday);
//add the number of minutes
DateTime date = firstDayOfWeek.Date.AddMinutes(minutes);
```
|
Title: Windows Media Player cannot play MPEG-TS file created in Android.
Tags: android;mpeg2-ts;stagefright
Question: I tested the StageFright record sample (frameworks/base/cmds/stagefright/record) to create a mpeg2 TS file. While it can be played on Android default Media player, it cannot be played in Windows Media Player or MPlayer. Any suggestions?
Note that I modified the original record sample source to create MPEG-TS file instead of MP4 file.
Comment: No. I just want to create ts file compatible with wmp, vlc, mplayer...
Comment: Is it working with other players such as VLC? What exactly do you want to achieve?
Here is another answer: Which codec did you use to create the MPEG-2 TS file? Maybe a difference in the codec used is the problem.
Comment for this answer: I used AVC, the codec seems ok, for if I created the mp4 file with the same codec (AVC), it can be played on VLC, WMP etc.
Comment for this answer: Android has some problems with .ts files; maybe there were some problems while creating that file.
|
Title: Karaf add additional property to existing config file
Tags: osgi;apache-karaf;karaf;blueprint-osgi
Question: I have a bundle which uses a configuration file ```org.jemz.karaf.tutorial.hello.service.config.cfg``` with one property:
```org.jemz.karaf.tutorial.hello.service.msg="I am a HelloServiceConfig!!"
```
My blueprint for using ConfigAdmin is like:
```<cm:property-placeholder persistent-id="org.jemz.karaf.tutorial.hello.service.config" update-strategy="reload" >
<cm:default-properties>
<cm:property name="org.jemz.karaf.tutorial.hello.service.msg" value="Hello World!"/>
</cm:default-properties>
</cm:property-placeholder>
<bean id="hello-service-config"
class="org.jemz.karaf.tutorial.hello.service.config.internal.HelloServiceConfig"
init-method="startup"
destroy-method="shutdown">
<property name="helloServiceConfiguration">
<props>
<prop key="org.jemz.karaf.tutorial.hello.service.msg" value="${org.jemz.karaf.tutorial.hello.service.msg}"/>
</props>
</property>
</bean>
<service ref="hello-service-config" interface="org.jemz.karaf.tutorial.hello.service.IHelloService" />
```
This works fine as long as I can change the value of the property and the bundle automatically updates the property.
I am wondering if there's any way of adding a new property to my config file without having to change the blueprint (which involves compiling/packaging again). Of course my bundle should be ready to handle new properties.
Not sure if this makes sense in OSGi. Can anyone give me a hint of how to dynamically add new properties to an existing configuration file and make them available in ConfigAdmin?
Here is the accepted answer: @yodamad showed me that properties were being updated in my ConfigurationAdmin service, but unfortunately my bundle was not receiving the new properties because in the bean definition I was using just a fixed property.
Finally in order to get new properties from config files I have changed my blueprint definition as follows:
```<cm:property-placeholder persistent-id="org.jemz.karaf.tutorial.hello.service.config" update-strategy="reload" >
</cm:property-placeholder>
<bean id="hello-service-config"
class="org.jemz.karaf.tutorial.hello.service.config.internal.HelloServiceConfig"
init-method="startup"
destroy-method="shutdown">
</bean>
<service ref="hello-service-config" interface="org.jemz.karaf.tutorial.hello.service.IHelloService" />
<reference id="hello-service-config-admin" interface="org.osgi.service.cm.ConfigurationAdmin"
availability="optional">
<reference-listener bind-method="setConfigAdmin"
unbind-method="unsetConfigAdmin">
<ref component-id="hello-service-config"/>
</reference-listener>
</reference>
```
I have a reference to the ConfigurationAdmin service so, now I can check all properties for my bundle as:
```public void setConfigAdmin(ConfigurationAdmin cfgAdmin) {
this.cfgAdmin = cfgAdmin;
try {
Configuration cfg = cfgAdmin.getConfiguration("org.jemz.karaf.tutorial.hello.service.config");
Dictionary<String, Object> properties = cfg.getProperties();
Enumeration<String> en = properties.keys();
while(en.hasMoreElements()) {
String key = en.nextElement();
System.out.println("KEY: " + key + " VAL: " + properties.get(key));
}
} catch (IOException e) {
e.printStackTrace();
}
}
```
As my 'update-strategy' is ```reload``` I can get updated whenever a new ConfigurationAdmin service is registered.
Any comment about my approach is welcome!
Comment for this answer: You can also use have a class that `implements ConfigurationListener, BundleContextAware` interfaces. Then you'll receive all events regarding configuration changes, thus you can process whatever you want in your application
Here is another answer: You can add a property in the Karaf shell:
1/ ```config:edit org.jemz.karaf.tutorial.hello.service.config```
2/ ```config:propset <key> <value>```
3/ finally ```config:update``` to save the change to the file.
If you want to do it programmatically, check the Karaf source to see the implementation.
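For illustration, a minimal sketch of the programmatic route through the standard OSGi ConfigurationAdmin API (the persistent id is the one from the question; whether the change is also written back to the .cfg file depends on Karaf's configuration persistence):
```import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class ConfigUpdater {
    public void addProperty(ConfigurationAdmin cfgAdmin, String key, String value) throws IOException {
        // Look up (or create) the configuration for the persistent id
        Configuration cfg = cfgAdmin.getConfiguration("org.jemz.karaf.tutorial.hello.service.config");
        Dictionary<String, Object> props = cfg.getProperties();
        if (props == null) {
            props = new Hashtable<>();
        }
        props.put(key, value);
        // Persist the change; bundles using this persistent id get notified
        cfg.update(props);
    }
}
```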
Comment for this answer: Hi Jorge, if you are using SpringDM I can give you sample of configuration to be able to have multiple configuration files managed for one bundle
Comment for this answer: Thanks for the answer. Your solution works, the properties are stored in the Properties place holder and I can list them. The problem is that my bundle cannot get those new properties. I think it is related to the bean definition I have only set one property to be injected "org.jemz,karaf.tutorial.service.msg" thus I cannot get more properties in my `setHelloServiceConfiguration` method. I have figured out another way of getting new properties from my config files I will post it in my answer. Anyway I will give you an upvote for the idea
|
Title: php validate email address based on the domain name
Tags: php;regex;if-statement
Question: I need to filter some email address based on the domain name :
Basically, if the domain name is yahoo-inc.com, facebook.com, baboo.com .. (and a few others) the function should do one thing, and if the domain is different it should do something else.
The only way I know to do this is to use a pattern/regex with preg_match_all and create cases/conditions for each blacklisted domain (e.g. if domain == yahoo-inc.com, do this; elseif domain == facebook.com, do this ... etc.), but I need to know if there is a more simple/concise way to include all the domains that I want to filter in a single variable/array and then apply only 2 conditions (e.g. if the email is in the blacklist {do something} else {do something else}).
Here is the accepted answer: Extract the domain portion (i.e. everything after the last '@'), lowercase it, and then use ```in_array``` to check whether it's in your blacklist:
```$blacklist = array('yahoo-inc.com', 'facebook.com', ...);
if (in_array($domain, $blacklist)) {
// bad domain
} else {
// good domain
}
```
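For completeness, a minimal sketch of the extraction step described above, assuming ```$user_email``` holds the address being checked (and that it has already passed validation):
```// Everything after the last '@', lowercased
$domain = strtolower(substr(strrchr($user_email, '@'), 1));
```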
Here is another answer: Well here's a very simple way to do this. A VALID email address should only ever contain a single ```@``` symbol, so as long as it passes validation you can just explode the string by ```@``` and collect the second segment.
Example:
```if (filter_var($user_email, FILTER_VALIDATE_EMAIL))
{
//Valid Email:
$parts = explode("@",$user_email);
/*
* You may want to use in_array if you already have a compiled array
* The switch statement is mainly used to show visually the check.
*/
switch(strtolower($parts[1]))
{
case 'facebook.com':
case 'gmail.com':
case 'googlemail.com':
//Do Something
break;
default:
//Do something else
break;
}
}
```
Comment for this answer: This would require a case for each blacklisted domain, it's better to store all the blacklisted domains in one array and then compare using in_array.
Here is another answer: Adding on to @Alnitak here is the full code to do what you need
```$domain = explode("@", $emailAddress);
$domain = $domain[(count($domain)-1)];
$blacklist = array('yahoo-inc.com', 'facebook.com', ...);
if (in_array($domain, $blacklist)) {
// bad domain
} else {
// good domain
}
```
Comment for this answer: technically you should take the last element of `$domain`, which is not guaranteed to be `[1]`. `"A@B"@C` is a legal e-mail address.
|
Title: Flutter: Is it necessary to remove the test device id from Firebase AdMob before publishing the app?
Tags: firebase;flutter;admob
Question: I'm using Firebase AdMob in my Flutter application, and I guess I need to remove the test device id from the code. If yes, then:
I'm a bit confused: if I need to remove the test device id before publishing my app, then what should replace it?
When I remove the test device id from the app, it throws an exception.
Here is the accepted answer: You don't need to remove the test device id.
Using a test device id means ads will be displayed as test ads on your device, which is what Google suggests: never serve real ads on your own device, or your account may be suspended, because you would be creating fake impressions.
So the bottom line is to keep using the test device id, whether in development or production. Only your test device will see test ads; end users will see real ads.
|
Title: ViewPager Glide
Tags: android;android-viewpager;android-glide
Question: I have a ViewPager (slider) with three images that are downloaded over the Internet. If I replace a picture with a different one on the server, the link remains the same, but the picture in the application does not change and remains the one that was cached.
```public class ViewPagerAdapter extends PagerAdapter {
private Context context;
private LayoutInflater layoutInflater;
private String [] image = {"http://rgho.st/7hDcbyT2F/image.png",
"http://guid-korenovsk.my1.ru/logos.png",
"https://4.bp.blogspot.com/-JKogH2VCCoY/V_aZWCFsmtI/AAAAAAAABAA/Lu6D13VXGSMMnYFO8T8-pKDeqbkHhNRAwCLcB/s320/VideoThumbail.PNG"};
public ViewPagerAdapter(Context context) {
this.context = context;
}
@Override
public int getCount() {
return image.length;
}
@Override
public boolean isViewFromObject(View view, Object object) {
return view == object;
}
@Override
public Object instantiateItem(ViewGroup container, int position) {
layoutInflater = (LayoutInflater)context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
View view = layoutInflater.inflate(R.layout.bulding_layout, null);
ImageView imageView = (ImageView)view.findViewById(R.id.imageViewPager);
Glide.with(context)
.load(image[position])
.diskCacheStrategy(DiskCacheStrategy.SOURCE)
.into(imageView);
ViewPager vp = (ViewPager)container;
vp.addView(view,0);
return view;
}
@Override
public void destroyItem(ViewGroup container, int position, Object object) {
ViewPager vp = (ViewPager)container;
View view = (View)object;
vp.removeView(view);
}
}
public class MyTimerTask extends TimerTask {
@Override
public void run() {
getActivity().runOnUiThread(new Runnable() {
@Override
public void run() {
if(viewPagerAds.getCurrentItem() == 0){
viewPagerAds.setCurrentItem(1);
}else if(viewPagerAds.getCurrentItem() == 1){
viewPagerAds.setCurrentItem(2);
}else viewPagerAds.setCurrentItem(0);
}
});
}
}
```
Here is another answer: You can use ```DiskCacheStrategy.NONE``` on this ```Glide``` request to avoid caching the image. In that case, ```Glide``` will download the image again every time. For a more optimized approach, look at the ```signature()``` method and use a custom signature that changes when the server invalidates its data.
Link to ```Glide``` wiki about cache invalidation: https://github.com/bumptech/glide/wiki/Caching-and-Cache-Invalidation
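For example, a minimal sketch of both options with the Glide 3.x API used in the question (```StringSignature``` and the ```imageVersionFromServer``` value are placeholders for whatever change marker your server can provide, such as a timestamp or ETag):
```// Option 1: skip caching so the image is re-downloaded every time
Glide.with(context)
     .load(image[position])
     .diskCacheStrategy(DiskCacheStrategy.NONE)
     .skipMemoryCache(true)
     .into(imageView);

// Option 2: keep caching, but change the cache key when the server updates the image
Glide.with(context)
     .load(image[position])
     .signature(new StringSignature(imageVersionFromServer)) // hypothetical version value from your backend
     .into(imageView);
```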
|
Title: Speed up background thread Android
Tags: java;android
Question: My Android application has taken me to a point where I need to run a brute-force method in a background thread. I need this to run incredibly fast.
In this context, the brute force system is trying to determine if three numbers can be used to mathematically solve for one number.
Example: num1 = 1 num2 = 2 num3 = 3. It tries all mathematical operations until it discovers a correct equation that equals a preset number.
My current code is already running on a background thread, but it is SOOOOOO slow.
Here it is:
```new Thread(new Runnable() {
@Override
public void run() {
Expression expressionToTest = new Expression();
double result;
while (expressionToTest.calculate() != number) {
expressionString = bruteForcer.computeNextCombination();
Expression expression = new Expression(expressionString);
System.out.println("Trying: " + expressionString);
result = expression.calculate();
System.out.println("Expression: " + String.valueOf(result));
expressionToTest = expression;
}
}
}).start();
}
```
It is probably not the best option in the world to brute force in Java, but it is what I must do. No other solutions have worked for me.
Taking into consideration what Graziano said, I came up with this code:
```new Thread(new Runnable() {
@Override
public void run() {
// Allocate the Expression object, using the 3 parameter in input
// The allocation already calculates the first result
Expression expression = new Expression("");
while (expression.calculate() != 3.0) {
// Make a step forward on operators, and calculate the result
expressionString = bruteForcer.computeNextCombination();
expression.setExpressionString(expressionString);
// No new allocations into this "while" cycle!
// You should pay attention to not allocate new variables during
// any step and calculation also into Expression object.
// All the allocations should be done into the constructor, that is called outside.
System.out.println("Trying: " + expressionString);
System.out.println("Expression" + String.valueOf(expression.calculate()));
}
}
}).start();
```
If you would care to look at this and comment what you think, that would be much appreciated!
Per request of @Graziano: Mathparser Expression object
I cannot really make any major modifications to this code... =(
Problem solved.
Comment: Thanks guys. The Expression type is from MxParser (http://mathparser.org/). It is what validates the equation after the brute-force class generates the combination.
Comment: The only character sets the brute-force algorithm takes into account are: +-()*^/sqrt. Sqrt is the square root function.
Comment: @Graziano could you give me an example of that?
Comment: As a tip, you could speed up the thread execution by avoiding memory allocations. Try to allocate out of the while cycle and reuse the same object for all the iterations. Try also to remove the System.out.println lines and repeat the test in order to verify that they don't impact the performances too much.
Comment: Also you can change the priority of the thread to a higher one using setPriority(). You can check out [this for more details](https://developer.android.com/reference/android/os/Process#THREAD_PRIORITY_URGENT_DISPLAY)
Comment: Where does the `Expression` type come from? Given that you're working with a very limited set of operations (how big depends on what's included in "all operations"), it's probably better to code specific code to try those combinations than to use a general-purpose evaluator. That would remove the string generation and parsing time from the loop which is probably significant and also massively reduce memory pressure (and thus garbage collection time).
Comment: While you're running this on a thread, it would likely help to parallelize the attempts such that you can try multiple combinations at once.
Here is another answer: You should find a way to write code like the following:
```new Thread(new Runnable() {
@Override
public void run() {
double rightResult = YOUR RESULT;
// Allocate the Expression object, using the 3 parameter in input
// The allocation already calculates the first result
Expression expression = new Expression(double num1, double num2, double num3);
while (expression.result != rightResult) {
// Make a step forward on operators, and calculate the result
expression.computeNextCombination();
// No new allocations into this "while" cycle!
// You should pay attention to not allocate new variables during
// any step and calculation also into Expression object.
// All the allocations should be done into the constructor, that is called outside.
System.out.print("Trying: " + expression.toString + " = " + expression.result);
}
}
}).start();
```
You should avoid allocating any variables or objects during each step and calculation, including inside the ```Expression``` object. All allocations should be done in the ```Expression``` constructor.
You can also improve the thread's performance by setting a higher priority, as @mohammed-ahmed rightly wrote, but if you choose a higher priority you might impact the user interface.
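A minimal sketch of the priority change on Android (the exact level is a judgment call, and as noted it may affect UI responsiveness):
```new Thread(new Runnable() {
    @Override
    public void run() {
        // Raise this background thread's priority for the duration of the brute force
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_DISPLAY);
        // ... brute-force loop goes here ...
    }
}).start();
```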
Comment for this answer: STILL SLOOWWWWW! =(
Comment for this answer: I believe I have found the issue... Running expression.calculate in the while loop logic is slowing it down. Im going to find a way to replace this somehow and get back to you.
Comment for this answer: I was able to replace the .calculate function. My code now runs super-fast!
Comment for this answer: Please post the code of your Expression object, so that we can make some checks
|
Title: ASP.NET applications scalability best practices guide
Tags: c#;.net;asp.net;scalability
Question: Is there a guide published by Microsoft or somebody else about best practices for creating scalable web applications? For example, which patterns to use and how to do data access.
Here is the accepted answer: The best advice I can give is to do these three things (roughly in order):
Avoid unnecessary postbacks
Avoid excessive viewstate
Spend your time optimizing your database
Comment for this answer: Nothing wrong with it, as long as you fit it in the points above: does it cause extra postbacks, does it cause viewstate (hint: viewstate is independent of data source), and can you optimize your db code when the time comes (my experience with linq is that you usually can).
Comment for this answer: Coming back to this years later, I'll add: **4. Avoid unnecessary memory allocations.**
Comment for this answer: What about using linqdatasources to bind to controls, is that bad or good?
Here is another answer: This is an excellent article on the subject of scale: http://msdn.microsoft.com/en-us/library/bb924375.aspx Which basically discusses the issues of load balancing, session affinity and caching all of which become important in any scaling discussion. If you have a specific question after that, let us know.
Here is another answer: I'd take a read through the MVC Storefront series. It's ASP.NET MVC based, but demonstrates a nice approach for creating a loosely-coupled, well architected website. You could easily apply most of the principles to a webforms site as necessary (though I'd recommend going with MVC if you have the choice...)
Here is another answer: The Microsoft Patterns & Practices group is a good one stop shopping ground for that.
Edit: Here is their guide specifically on Scalability and Performance. Chapter 6 includes information specific to ASP.NET Performance and Chapter 17 includes information about Tuning.
Comment for this answer: Specifically chapter 6 (http://msdn.microsoft.com/en-us/library/ms998549.aspx) and 17 (http://msdn.microsoft.com/en-us/library/ms998583.aspx) for ASP.Net.
Comment for this answer: @adrianbanks - Edited the main answer with you specific links. Thanks.
|
Title: Initializing a LookupMap with near-sdk-rust
Tags: rust;webassembly;smartcontracts;nearprotocol
Question: I am currently developing a contract where I want to make use of a LookupMap, however it's not clear to me how to initialize it. Here is the code:
```// Just a structure
pub struct Gift {
url: String,
n_tokens_required: usize,
current_tokens: usize,
}
// Main contract code
#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct Voting { // TODO: Rename this class
pub gifts: LookupMap<String, Vector<Gift>>,
pub contract_owner: String,
}
impl Default for Voting {
fn default() -> Self {
Voting {gifts: LookupMap::new(), contract_owner: env::current_account_id()}
}
}
```
So I want the gifts attribute of my contract to be a LookupMap with the signature ```LookupMap<String, Vector<Gift>>```. How can I initialize it on my default function implementation?
When I try to do ```LookupMap::new()```, it says that I need a parameter key_prefix, with the trait IntoStorageKey, but it's not clear to me what this parameter actually is.
Could anyone help me understand this better?
Thank you!
Here is the accepted answer: You will find your answer on this page https://www.near-sdk.io/contract-structure/collections
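In short, the key_prefix is just a byte prefix under which the collection stores its entries in contract storage; each persistent collection needs its own unique prefix. A minimal sketch, assuming a near-sdk version where ```Vec<u8>``` implements ```IntoStorageKey```:
```use near_sdk::collections::LookupMap;
use near_sdk::env;

impl Default for Voting {
    fn default() -> Self {
        Voting {
            // b"g" is arbitrary; it just has to be unique among the
            // collections stored by this contract.
            gifts: LookupMap::new(b"g".to_vec()),
            contract_owner: env::current_account_id(),
        }
    }
}
```
Note that each nested ```Vector<Gift>``` you later insert as a value also needs its own unique prefix when it is constructed.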
|
Title: How To Get State of Switch If No Match in React ReactRouter
Tags: reactjs;react-router
Question: I'm using React Router and want to do something like set state when the RouteNotFound component is hit (being last, it means none of the routes above matched).
My ```this.handler``` does a setState so that I can tell that it was called. That of course gives me an error saying "Cannot update during an existing state transition such as in a render".
Is there a way I can set my state to tell me which of the routes in the Switch (I really want to know the last one) got executed?
```render() {
return (
<div>
<Switch>
<Route exact path="/" component={Home}/>
<Route exact path="/p1" component={p1}/>
<Route exact path="/p2" component={p2}/>
<RouteNotFound action={this.handler} ></RouteNotFound>
</Switch>
</div>
);
}
```
Comment: What you are looking for is the onEnter functionality of react-router which in react-router-v4 can be acieved through lifecycle methods. Check the duplicate question for implementation
Here is the accepted answer: Here we go:
not-found.js:
```import React from 'react';
import { Route } from 'react-router-dom';
const RouteNoteFound = ({ action }) => (
<Route
render={props => (
<NotFoundPageComponent action={action} />
)}
/>
);
export default RouteNoteFound;
class NotFoundPageComponent extends React.Component {
constructor(props) {
super(props);
}
componentWillMount() {
this.props.action()
}
render() {
return (
<div>404 not found</div>
)
}
}
```
and in index.js:
```handler = () => {
alert("alert from app.js but run in not found page")
}
render() {
return (
<BrowserRouter>
<Switch>
<Route exact path="/" component={Page} />
<Route path="/page" component={Page} />
<RouteNoteFound action={this.handler} />
</Switch>
</BrowserRouter>
);
}
```
here is DEMO in stackblitz.
Comment for this answer: in index.js, I want to push that "state" up to its parent and then again up to its parent. I think I want to set state in the action handler and then in ComponentDidUpdate, send the state up to parent. When I do that I get into recursion problem.
Here is another answer: Try this.
Define the not found route like this:
```<Route render={props => <RouteNotFound action={this.handler} />}></Route>
```
Because, as per the Doc:
```
A ```<Route>``` with no path prop or a ```<Redirect>``` with no from prop will
always match the current location.
```
Check the Doc example for "No Path Match".
Now use the ```componentDidMount``` lifecycle method in the RouteNotFound component; whenever RouteNotFound gets rendered, ```componentDidMount``` will be called. Call the parent function from ```componentDidMount```, like this:
```componentDidMount(){
this.props.action();
}
```
Now do setState inside ```action``` in parent component.
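With the call coming from the child's ```componentDidMount```, the parent can safely update its state; a minimal sketch (the ```notFound``` field name is just illustrative):
```handler = () => {
  // Safe: invoked from the child's componentDidMount, not during render
  this.setState({ notFound: true });
}
```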
Comment for this answer: not sure, why it is throwing error, are you using `()` with `this.handler` ? because it should work. It is working when doing setState inside `componentWillMount`?
Comment for this answer: The problem still exists in {this.handler}. I gather I can not do a state update inside the render function. Warning: Cannot update during an existing state transition (such as within `render` or another component's constructor). Render methods should be a pure function of props and state; constructor side-effects are an anti-pattern, but can be moved to `componentWillMount`.
Comment for this answer: what are you expection is in the {this.handler}. I've got the setState(...) in that which is causing the error I listed in the previous comment
|
Title: Can't get JPA with mysql running
Tags: hibernate;tomcat;jpa;maven;jsf-2
Question: I am trying to setup a project using CDI-weld, JSF 2-mojarra, RichFaces 4, JPA-hibernate, Bean-Validation-hibernate, maven 2, Tomcat 7 and Eclipse Juno.
I thought it would be interesting to feel out the synergy of the latest stuff (except for Maven; too lazy to get into Maven 3).
I got everything working like a charm except for the last piece in the puzzle, adding JPA. There's a lot of information on how to set it up and after trying for a while now I just feel very confused. I have it narrowed down to this basically:
(from this post: what dependencies my project should have if I'm using JPA in Hibernate?)
you need:
slf4j-jdk14 and hibernate-entitymanager as artifacts and that's it? I tried a bunch of different setups but none of them worked. It just won't generate the tables.
(For now I'm trying with these artifacts and I figured I'll worry about versions later)
``` <dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>3.5.1-Final</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-jdk14</artifactId>
<!-- version 1.5.8 is the latest version that will work with the slf4j-api
that's currently bundled with hibernate-parent -->
<version>1.5.8</version>
</dependency>
```
persistance.xml is located here: src - main - webapp - WEB-INF - classes - META-INF - persistance.xml.
Tried a bunch of locations with no luck.
Content is as follows:
```<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://svn.apache.org/repos/asf/geronimo/components/geronimo-schema-javaee_6/trunk/src/main/xsd/persistence_2_0.xsd"
version="2.0">
<persistence-unit name="pu">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<class>com.test.User</class>
<properties>
<!-- Auto detect annotation model classes -->
<property name="hibernate.archive.autodetection" value="class" />
<property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect" />
<property name="hibernate.hbm2ddl.auto" value="update" />
<property name="hibernate.show_sql" value="false" />
<property name="hibernate.format_sql" value="true" />
<property name="hibernate.connection.username" value="root" />
<property name="hibernate.connection.password" value="1111" />
<property name="hibernate.connection.url" value="jdbc:mysql://localhost/woot" />
</properties>
</persistence-unit>
</persistence>
```
Borrowed test Class:
```/**
* User account that is required for logging in and posting bookmarks.
*
* @author Andy Gibson
*
*/
@Entity
@Table(name = "USERS")
public class User {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
@Column(length = 24)
private String username;
@Column(length = 24,name="userPassword")
private String password;
@Column(length = 24)
private String firstName;
@Column(length = 24)
private String lastName;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public String getFirstName() {
return firstName;
}
public void setFirstName(String firstName) {
this.firstName = firstName;
}
public String getLastName() {
return lastName;
}
public void setLastName(String lastName) {
this.lastName = lastName;
}
}
```
Tomcat log:
```apr 21, 2012 10:58:54 EM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: E:\Programming\Setup\Glassfish\jdk7\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;E:\Programming\Setup\Glassfish\jdk7\jre\bin;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP\bin\x86;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\Windows Live\Shared;C:\Program Files\Microsoft Windows Performance Toolkit\;E:\Programming\Setup\Glassfish\jdk7\bin;C:\Program Files (x86)\Groovy\Groovy-1.8.0\bin;C:\Program Files (x86)\Calibre2\;E:\Programming\Setup\apache-maven-2.2.1\bin;E:\Program\SVN\bin;E:\Programming\grails-2.0.0\\bin;.
apr 21, 2012 10:58:54 EM org.apache.tomcat.util.digester.SetPropertiesRule begin
WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.j2ee.server:woot' did not find a matching property.
apr 21, 2012 10:58:54 EM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-bio-8080"]
apr 21, 2012 10:58:54 EM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["ajp-bio-8009"]
apr 21, 2012 10:58:54 EM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 900 ms
apr 21, 2012 10:58:54 EM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
apr 21, 2012 10:58:54 EM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.22
apr 21, 2012 10:58:55 EM org.apache.catalina.core.StandardContext addApplicationListener
INFO: The listener "com.sun.faces.config.ConfigureListener" is already configured for this context. The duplicate definition has been ignored.
apr 21, 2012 10:58:55 EM org.jboss.weld.bootstrap.WeldBootstrap <clinit>
INFO: WELD-000900 1.1.5 (Final)
apr 21, 2012 10:58:55 EM org.jboss.weld.bootstrap.WeldBootstrap startContainer
INFO: WELD-000101 Transactional services not available. Injection of @Inject UserTransaction not available. Transactional observers will be invoked synchronously.
apr 21, 2012 10:58:56 EM org.jboss.weld.environment.tomcat7.Tomcat7Container initialize
INFO: Tomcat 7 detected, CDI injection will be available in Servlets and Filters. Injection into Listeners is not supported
apr 21, 2012 10:58:56 EM org.jboss.interceptor.util.InterceptionTypeRegistry <clinit>
WARNING: Class 'javax.ejb.PostActivate' not found, interception based on it is not enabled
apr 21, 2012 10:58:56 EM org.jboss.interceptor.util.InterceptionTypeRegistry <clinit>
WARNING: Class 'javax.ejb.PrePassivate' not found, interception based on it is not enabled
apr 21, 2012 10:58:56 EM com.sun.faces.config.ConfigureListener contextInitialized
INFO: Initializing Mojarra 2.1.7 (SNAPSHOT 20120206) for context '/rdwebref'
apr 21, 2012 10:58:58 EM com.sun.faces.spi.InjectionProviderFactory createInstance
INFO: JSF1048: PostConstruct/PreDestroy annotations present. ManagedBeans methods marked with these annotations will have said annotations processed.
apr 21, 2012 10:58:59 EM org.hibernate.validator.util.Version <clinit>
INFO: Hibernate Validator 4.0.2.GA
apr 21, 2012 10:58:59 EM org.hibernate.validator.engine.resolver.DefaultTraversableResolver detectJPA
INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver.
apr 21, 2012 10:59:00 EM org.hibernate.validator.engine.resolver.DefaultTraversableResolver detectJPA
INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver.
apr 21, 2012 10:59:00 EM org.richfaces.cache.CacheManager getCacheFactory
INFO: Selected fallback cache factory
apr 21, 2012 10:59:00 EM org.richfaces.cache.lru.LRUMapCacheFactory createCache
INFO: Creating LRUMap cache instance using parameters: {javax.servlet.jsp.jstl.fmt.localizationContext=resources.application, javax.faces.STATE_SAVING_METHOD=client, javax.faces.DEFAULT_SUFFIX=.xhtml}
apr 21, 2012 10:59:00 EM org.richfaces.cache.lru.LRUMapCacheFactory createCache
INFO: Creating LRUMap cache instance of 512 items capacity
apr 21, 2012 10:59:00 EM org.richfaces.application.InitializationListener onStart
INFO: RichFaces Core Implementation by JBoss by Red Hat, version v.4.2.0.Final
apr 21, 2012 10:59:00 EM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-bio-8080"]
apr 21, 2012 10:59:00 EM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["ajp-bio-8009"]
apr 21, 2012 10:59:00 EM org.apache.catalina.startup.Catalina start
INFO: Server startup in 6130 ms
```
I did create the database called "woot", and the password and username are correct. I have gotten MySQL to work with Grails 2 previously.
Hope someone can help me out here! cheers
Comment: Do you create an EntityManagerFactory? That should generate some error messages if something is wrong.
Here is another answer: Basically, there are two ways of using JPA:
In an EJB3 Container, such as Apache Geronimo, JBoss AS, Glassfish, etc. Tomcat is just a Web Container, so this option is unavailable for it.
Using JPA in this mode has two advantages:
a. If you use Session Beans, the Entity Manager can be injected by the EJB framework, no Weld required;
b. The transaction control is done automatically by the EJB Container (JTA).
The so-called "Resource Local" mode. In this mode you have to create the EntityManagerFactory on your own, as well as taking care of all resources, including:
The EntityManager
Transactions
This is the most common way to use JPA if you are not dealing with an EJB Container.
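A minimal sketch of that resource-local pattern, using the persistence unit name "pu" from the persistence.xml above (bootstrapping the factory like this at startup will also surface configuration errors early):
```import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class UserDao {
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("pu");

    public void save(User user) {
        EntityManager em = EMF.createEntityManager();
        try {
            em.getTransaction().begin();
            em.persist(user);
            em.getTransaction().commit();
        } finally {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            em.close();
        }
    }
}
```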
In order to avoid boilerplate code and to make things a lot easier, an injection manager is advised. Spring, Guice, and Pico are great choices, but Weld is good too, and slightly better since it implements CDI, Java's standard for injection.
My best bet is to look for more info about how to use Weld to inject the EntityManager in your Tomcat applications. I found this link with a long discussion about it:
https://community.jboss.org/thread/179355
In short: use Seam 3.
Good luck
Comment for this answer: Well my problem is I can't get it to generate the tables, maybe this never came through
|
Title: Angular Material Scrollbar on Card Content Enclosed in a Tab
Tags: html;css;angular-material;scrollbar
Question: Please help with a problem I'm having styling Angular Material tabs.
See: https://stackblitz.com/edit/angular-6-material-tab-problem?file=app/tab.component.html
[Edit: The stackblitz example is working as expected after making the changes that I noted in my answer]
The first tab is close to how I'd like it:
An overflow-y scrollbar is available when the content in mat-card-content exceeds the visible space in the tab.
No scrollbars on the mat-card or mat-tab.
The layout should be able to resize appropriately: on window resize, for example, the height of the mat-tab-group could grow taller, and the mat-card-content should also grow taller and stay scrollable, if needed.
There is a problem, though, when the length of the content in the mat-card-content doesn't exceed the height of the tab.
For example, with the same styling though, the second tab isn't right. The red rectangle of the mat-card should fill the height of the mat-tab. There is also a scrollbar on the far right which isn't needed.
Here is the template:
```<mat-tab-group>
<mat-tab label="Properties">
<mat-card class="scrollable-content">
<mat-card-header>
<mat-card-title>Card Data</mat-card-title>
</mat-card-header>
<mat-card-content>
My Content for this card...<br>
0 Lots and lots of content.<br>
1 Lots and lots of content.<br>
2 Lots and lots of content.<br>
3 Lots and lots of content.<br>
4 Lots and lots of content.<br>
5 Lots and lots of content.<br>
6 Lots and lots of content.<br>
7 Lots and lots of content.<br>
8 Lots and lots of content.<br>
9 Lots and lots of content.<br>
10 Lots and lots of content.<br>
11 Lots and lots of content.<br>
12 Lots and lots of content.<br>
13 Lots and lots of content.<br>
14 Lots and lots of content.<br>
15 Lots and lots of content.<br>
16 Lots and lots of content.<br>
17 Lots and lots of content.<br>
18 Lots and lots of content.<br>
19 Lots and lots of content.<br>
20 Lots and lots of content.<br>
21 Lots and lots of content.<br>
22 Lots and lots of content.<br>
23 Lots and lots of content.<br>
</mat-card-content>
</mat-card>
</mat-tab>
<mat-tab label="2nd Tab">
<mat-card class="scrollable-content">
<mat-card-header>
<mat-card-title>Other Stuff</mat-card-title>
</mat-card-header>
<mat-card-content>
2nd Tab
</mat-card-content>
</mat-card>
</mat-tab>
</mat-tab-group>
```
And here is the CSS:
```.mat-card.scrollable-content{
overflow: hidden;
display: flex;
flex-direction: column;
}
.mat-card.scrollable-content mat-card-content {
overflow-y: auto;
}
.mat-card.scrollable-content mat-card-title {
display: block;
}
.mat-tab-group {
height: 400px;
border: 2px solid #00F;
}
.mat-card-content {
border: 2px solid #0F0
}
.mat-card-title {
font-weight: bold;
font-size: 1.25em !important;
}
.mat-card {
border: 2px solid #F00;
height: 85%;
}
.mat-tab-label{
font-weight: bold;
font-size: 1.25em;
}
```
Here is the accepted answer: I was able to solve the problem after realizing that the Material Design elements used in the template get preprocessed into other CSS classes.
I got it working by adding the following to the css:
```.mat-tab-body-content {
overflow: hidden !important;
}
.mat-tab-body-wrapper {
height: 100% !important;
}
```
|
Title: Boot Linux in UEFI from repacked ISO (without writing it correctly)
Tags: boot;grub;iso-image;efi
Question: I tried to unpack an Ubuntu ISO onto a USB stick without writing it properly (I can't use superuser utilities like dd or Rufus). Then I set my USB stick first in the UEFI boot priority. I got a working GRUB, but when I tried to boot the live system I got this error:
```Initramfs unpacking failed: Decoding failed;
Unable to find a medium container a live file system
Attempt interactive netboot from a URL?
```
Maybe there is a way to run it from grub shell?
Comment: To create a *bootable* USB, you need to create a *boot partition*, which is what Rufus, UNetbootin and other utilities do. See https://alternativeto.net/software/unetbootin/ for alternatives, but you'll need to run as sudo to install. If you have access to a Windows machine, you can create a USB there with Rufus.
Comment: @DrMoishePippik but it seems to run .efi files fine even without boot partition on USB stick.
Here is another answer: After some tries I found out I can repack Arch (it was the archlinux-2020.10.01-x86_64 image) and some UEFIs will be able to run its GRUB. Arch itself won't run, but it will start up an emergency shell (which seems to not be a thing in Ubuntu). This shell has the mount and dd utilities, so it is possible to create a bootable device afterwards. And you don't need superuser rights to just repack an ISO onto a USB stick (in fact, it was done with an SD card on a non-rooted Android smartphone). It worked on 2 out of 3 laptops I tried.
So I repacked the arch.iso onto the SD card and also placed the ubuntu.iso on it. In Arch's emergency shell I wrote the ubuntu.iso to the USB stick with dd, and then I was finally able to run it.
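For reference, the dd step from the emergency shell looks roughly like this (```/dev/sdX``` is a placeholder for the USB device node, and ```/path/to/ubuntu.iso``` is wherever the mounted SD card exposes the image; getting the device wrong will overwrite the wrong disk):
```dd if=/path/to/ubuntu.iso of=/dev/sdX bs=4M
sync
```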
|
Title: TVML & database
Tags: javascript;tvml
Question: I am currently working on my first TVML script and I want to understand how I can turn the following static code into dynamic code. The main part is fetching the radio stations from our DB instead of hard-coding them in the script.
This is the code I was provided.
```//# sourceURL=application.js
/*
Copyright (C) 2015 Apple Inc. All Rights Reserved.
See LICENSE.txt for this sample’s licensing information
Abstract:
The TVMLKit application.
*/
var resourceLoader;
/**
* @description The onLaunch callback is invoked after the application JavaScript
* has been parsed into a JavaScript context. The handler is passed an object
* that contains options passed in for launch. These options are defined in the
* Swift or Objective-C client code. Options can be used to communicate to
* your JavaScript code that data and as well as state information, like if the
* the app is being launched in the background.
*
* The location attribute is automatically added to the object and represents
* the URL that was used to retrieve the application JavaScript.
*/
App.onLaunch = function(options) {
/**
* In this example we are passing the server BASEURL as a property
* on the options object.
*/
var javascriptFiles = [
`${options.BASEURL}js/ResourceLoader.js`,
`${options.BASEURL}js/Presenter.js`
];
/**
* evaluateScripts is responsible for loading the JavaScript files necessary
* for your app to run. It can be used at any time in your app's lifecycle.
*
* @param - Array of JavaScript URLs
* @param - Function called when the scripts have been evaluated. A boolean is
* passed that indicates if the scripts were evaluated successfully.
*/
evaluateScripts(javascriptFiles, function(success) {
if (success) {
resourceLoader = new ResourceLoader(options.BASEURL);
var index = resourceLoader.loadResource(`${options.BASEURL}templates/Stack.xml.js`,
function(resource) {
var doc = Presenter.makeDocument(resource);
doc.addEventListener("select", Presenter.load.bind(Presenter));
doc.addEventListener("select", startPlayback);
doc.addEventListener("play", startPlayback);
navigationDocument.pushDocument(doc);
});
} else {
/*
Be sure to handle error cases in your code. You should present a readable, and friendly
error message to the user in an alert dialog.
See alertDialog.xml.js template for details.
*/
var alert = createAlert("Evaluate Scripts Error", "There was an error attempting to evaluate the external JavaScript files.\n\n Please check your network connection and try again later.");
navigationDocument.presentModal(alert);
throw ("Playback Example: unable to evaluate scripts.");
}
});
}
/**
* This convenience function returns an alert template, which can be used to present errors to the user.
*/
var createAlert = function(title, description) {
var alertString = `<?xml version="1.0" encoding="UTF-8" ?>
<document>
<alertTemplate>
<title>${title}</title>
<description>${description}</description>
</alertTemplate>
</document>`
var parser = new DOMParser();
var alertDoc = parser.parseFromString(alertString, "application/xml");
return alertDoc
}
/**
* @description
* @param {Object} event - The 'select' or 'play' event
*/
function startPlayback(event) {
var id = event.target.getAttribute("id"),
videos = Videos[id];
/*
In TVMLKit, playback is handled entirely from JavaScript. The TVMLKit Player
handles both audio and video MediaItems in any format supported by AVPlayer. You
can also mix MediaItems of either type or format in the Player's Playlist.
*/
var player = new Player();
/*
The playlist is an array of MediaItems. Each player must have a playlist,
even if you only intend to play a single asset.
*/
player.playlist = new Playlist();
videos.forEach(function(metadata) {
/*
MediaItems are instantiated by passing two arguments to the MediaItem
constructor, media type as a string ('video', 'audio') and the url for
the asset itself.
*/
var video = new MediaItem('video', metadata.url);
/*
You can set several properties on the MediaItem. Some properties are
informational and are used to present additional information to the
user. Other properties will determine the behavior of the player.
For a full list of available properties, see the TVMLKit documentation.
*/
video.title = metadata.title;
video.subtitle = metadata.subtitle;
video.description = metadata.description;
video.artworkImageURL = metadata.artworkImageURL;
/*
ContentRatingDomain and contentRatingRanking are used together to enforce
parental controls. If Parental Controls have been set for the device and
the contentRatingRanking is higer than the device setting, the user will
be prompted to enter their device Parental PIN Code in order to play the
current asset.
*/
video.contentRatingDomain = metadata.contentRatingDomain;
video.contentRatingRanking = metadata.contentRatingRanking;
/*
The resumeTime is used to communicate the time at which a user previously stopped
watching this asset, a bookmark. If this property is present the user will be
prompted to resume playback from the point or start the asset over.
resumeTime is the number of seconds from the beginning of the asset.
*/
video.resumeTime = metadata.resumeTime;
/*
The MediaItem can be added to the Playlist with the push function.
*/
player.playlist.push(video);
});
/*
This function is a convenience function used to set listeners for various playback
events.
*/
setPlaybackEventListeners(player);
/*
Once the Player is ready, playback is started by calling the play function on the
Player instance.
*/
player.play();
}
/**
* @description Sets playback event listeners on the player
* @param {Player} currentPlayer - The current Player instance
*/
function setPlaybackEventListeners(currentPlayer) {
/**
* The requestSeekToTime event is called when the user attempts to seek to a specific point in the asset.
* The listener is passed an object with the following attributes:
* - type: this attribute represents the name of the event
* - target: this attribute represents the event target which is the player object
* - timeStamp: this attribute represents the timestamp of the event
* - currentTime: this attribute represents the current playback time in seconds
* - requestedTime: this attribute represents the time to seek to in seconds
* The listener must return a value:
* - true to allow the seek
* - false or null to prevent it
* - a number representing an alternative point in the asset to seek to, in seconds
* @note Only a single requestSeekToTime listener can be active at any time. If multiple eventListeners are added for this event, only the last one will be called.
*/
currentPlayer.addEventListener("requestSeekToTime", function(event) {
console.log("Event: " + event.type + "\ntarget: " + event.target + "\ntimestamp: " + event.timeStamp + "\ncurrent time: " + event.currentTime + "\ntime to seek to: " + event.requestedTime) ;
return true;
});
/**
* The shouldHandleStateChange is called when the user requests a state change, but before the change occurs.
* The listener is passed an object with the following attributes:
* - type: this attribute represents the name of the event
* - target: this attribute represents the event target which is the player object
* - timeStamp: this attribute represents the name of the event
* - state: this attribute represents the state that the player will switch to, possible values: playing, paused, scanning
* - oldState: this attribute represents the previous state of the player, possible values: playing, paused, scanning
* - elapsedTime: this attribute represents the elapsed time, in seconds
* - duration: this attribute represents the duration of the asset, in seconds
* The listener must return a value:
* - true to allow the state change
* - false to prevent the state change
* This event should be handled as quickly as possible because the user has already performed the action and is waiting for the application to respond.
* @note Only a single shouldHandleStateChange listener can be active at any time. If multiple eventListeners are added for this event, only the last one will be called.
*/
currentPlayer.addEventListener("shouldHandleStateChange", function(event) {
console.log("Event: " + event.type + "\ntarget: " + event.target + "\ntimestamp: " + event.timeStamp + "\nold state: " + event.oldState + "\nnew state: " + event.state + "\nelapsed time: " + event.elapsedTime + "\nduration: " + event.duration);
return true;
});
/**
* The stateDidChange event is called after the player switched states.
* The listener is passed an object with the following attributes:
* - type: this attribute represents the name of the event
* - target: this attribute represents the event target which is the player object
* - timeStamp: this attribute represents the timestamp of the event
* - state: this attribute represents the state that the player switched to
* - oldState: this attribute represents the state that the player switched from
*/
currentPlayer.addEventListener("stateDidChange", function(event) {
console.log("Event: " + event.type + "\ntarget: " + event.target + "\ntimestamp: " + event.timeStamp + "\noldState: " + event.oldState + "\nnew state: " + event.state);
});
/**
* The stateWillChange event is called when the player is about to switch states.
* The listener is passed an object with the following attributes:
* - type: this attribute represents the name of the event
* - target: this attribute represents the event target which is the player object
* - timeStamp: this attribute represents the timestamp of the event
* - state: this attribute represents the state that the player switched to
* - oldState: this attribute represents the state that the player switched from
*/
currentPlayer.addEventListener("stateWillChange", function(event) {
console.log("Event: " + event.type + "\ntarget: " + event.target + "\ntimestamp: " + event.timeStamp + "\noldState: " + event.oldState + "\nnew state: " + event.state);
});
/**
* The timeBoundaryDidCross event is called every time a particular time point is crossed during playback.
* The listener is passed an object with the following attributes:
* - type: this attribute represents the name of the event
* - target: this attribute represents the event target which is the player object
* - timeStamp: this attribute represents the timestamp of the event
* - boundary: this attribute represents the boundary value that was crossed to trigger the event
* When adding the listener, a third argument has to be provided as an array of numbers, each representing a time boundary as an offset from the beginning of the asset, in seconds.
* @note This event can fire multiple times for the same time boundary as the user can scrub back and forth through the asset.
*/
currentPlayer.addEventListener("timeBoundaryDidCross", function(event) {
console.log("Event: " + event.type + "\ntarget: " + event.target + "\ntimestamp: " + event.timeStamp + "\nboundary: " + event.boundary);
}, [30, 100, 150.5, 180.75]);
/**
* The timeDidChange event is called whenever a time interval has elapsed, this interval must be provided as the third argument when adding the listener.
* The listener is passed an object with the following attributes:
* - type: this attribute represents the name of the event
* - target: this attribute represents the event target which is the player object
* - timeStamp: this attribute represents the timestamp of the event
* - time: this attribute represents the current playback time, in seconds.
* - interval: this attribute represents the time interval
* @note The interval argument should be an integer value as floating point values will be coerced to integers. If omitted, this value defaults to 1
*/
currentPlayer.addEventListener("timeDidChange", function(event) {
console.log("Event: " + event.type + "\ntarget: " + event.target + "\ntimestamp: " + event.timeStamp + "\ntime: " + event.time + "\ninterval: " + event.interval);
}, { interval: 10 });
/**
* The mediaItemDidChange event is called after the player switches media items.
* The listener is passed an event object with the following attributes:
* - type: this attribute represents the name of the event
* - target: this attribute represents the event target which is the player object
* - timeStamp: this attribute represents the timestamp of the event
* - reason: this attribute represents the reason for the change; possible values are: 0 (Unknown), 1 (Played to end), 2 (Forwarded to end), 3 (Errored), 4 (Playlist changed), 5 (User initiated)
*/
currentPlayer.addEventListener("mediaItemDidChange", function(event) {
console.log("Event: " + event.type + "\ntarget: " + event.target + "\ntimestamp: " + event.timeStamp + "\nreason: " + event.reason);
});
/**
* The mediaItemWillChange event is when the player is about to switch media items.
* The listener is passed an event object with the following attributes:
* - type: this attribute represents the name of the event
* - target: this attribute represents the event target which is the player object
* - timeStamp: this attribute represents the timestamp of the event
* - reason: this attribute represents the reason for the change; possible values are: 0 (Unknown), 1 (Played to end), 2 (Forwarded to end), 3 (Errored), 4 (Playlist changed), 5 (User initiated)
*/
currentPlayer.addEventListener("mediaItemWillChange", function(event) {
console.log("Event: " + event.type + "\ntarget: " + event.target + "\ntimestamp: " + event.timeStamp + "\nreason: " + event.reason);
});
}
/**
* An object to store videos and playlists for ease of access
*/
var Videos = {
//Top Station
video1: [{
title: "The Breeza",
url: "http://stream4.radiomonitor.com/Breeze-Basingstoke-128?token=Y4aDY2KFgWJoiYdzpLeyq5d/j5Klu7uYpca8nJd/fmNq"
}],
video2: [{
title: "The Arrow",
url: "http://media-ice.musicradio.com/ArrowMP3"
}],
video3: [{
title: "MiX Radio",
url: "http://fr5.1mix.co.uk:8020"
}],
video4: [{
title: "Jazz FM",
url: "http://adsi-e-01-cr.sharp-stream.com/jazzfm.aac"
}],
video5: [{
title: "Classic FM Live",
url: "http://vis.media-ice.musicradio.com/ClassicFMMP3"
}],
video6: [{
title: "Fun Kids",
url: "http://icy-e-01.sharp-stream.com/funkids.mp3"
}],
video7: [{
title: "Imagine FM",
url: "http://stream4.radiomonitor.com:80/Imagine"
}],
video8: [{
title: "Heort 106.2",
url: "http://ice-the.musicradio.com/HeartLondonMP3"
}],
video9: [{
title: "Manx Radio",
url: "http://icy-event-04.sharp-stream.com/manxradioam.mp3"
}],
video10: [{
title: "Smooth 102.2",
url: "http://media-ice.musicradio.com/SmoothUKMP3"
}],
//80s
video80s1: [{
title: "Absolute Radio",
url: "http://stream.timlradio.co.uk:80/ABSOLUTE60SIRAACS"
}],
video80s2: [{
title: "BR",
url: "http://str1.sad.ukrd.com:80/2br"
}],
video80s3: [{
title: "Classic FM Live",
url: "http://vis.media-ice.musicradio.com/ClassicFMMP3"
}],
//90s
video90s1: [{
title: "Absolute Radio",
url: "http://stream.timlradio.co.uk:80/ABSOLUTE90SIRAACS"
}],
video90s2: [{
title: "Classic FM",
url: "http://media-ice.musicradio.com/ClassicFMMP3"
}],
video90s3: [{
title: "Imagine FM",
url: "http://stream4.radiomonitor.com:80/Imagine"
}],
video90s4: [{
title: "Island FM",
url: "http://sharpflow.sharp-stream.com:8000/tindleisland.mp3"
}],
//Pop
videopop1: [{
title: "Fun Kids",
url: "http://icy-e-01.sharp-stream.com/funkids.mp3"
}],
videopop2: [{
title: "The Breeza",
url: "http://stream4.radiomonitor.com/Breeze-Basingstoke-128?token=Y4aDY2KFgWJoiYdzpLeyq5d/j5Klu7uYpca8nJd/fmNq"
}],
videopop3: [{
title: "Smooth 102.2",
url: "http://media-ice.musicradio.com/SmoothUKMP3"
}],
//Jazz
videojazz1: [{
title: "Jazz FM",
url: "http://adsi-e-01-cr.sharp-stream.com/jazzfm.aac"
}],
videojazz2: [{
title: "The Arrow",
url: "http://media-ice.musicradio.com/ArrowMP3"
}],
videojazz3: [{
title: "Heart 106.2",
url: "http://ice-the.musicradio.com/HeartLondonMP3"
}],
//hiphop
videohiphop1: [{
title: "MiX Radio",
url: "http://fr5.1mix.co.uk:8020"
}],
videohiphop2: [{
title: "Manx Radio",
url: "http://icy-event-04.sharp-stream.com/manxradioam.mp3"
}],
videohiphop3: [{
title: "Capital FM",
url: "http://media-ice.musicradio.com/CapitalMP3"
}],
//All Station
videoAll1: [{
title: "Heort 106.2",
url: "http://ice-the.musicradio.com/HeartLondonMP3"
}],
videoAll2: [{
title: "Manx Radio",
url: "http://icy-event-04.sharp-stream.com/manxradioam.mp3"
}],
videoAll3: [{
title: "Capital FM",
url: "http://media-ice.musicradio.com/CapitalMP3"
}],
videoAll4: [{
title: "Classic FM",
url: "http://media-ice.musicradio.com/ClassicFMMP3"
}],
videoAll5: [{
title: "Imagine FM",
url: "http://stream4.radiomonitor.com:80/Imagine"
}],
videoAll6: [{
title: "MiX Radio",
url: "http://fr5.1mix.co.uk:8020"
}],
videoAll7: [{
title: "The Arrow",
url: "http://media-ice.musicradio.com/ArrowMP3"
}],
videoAll8: [{
title: "Jazz FM",
url: "http://adsi-e-01-cr.sharp-stream.com/jazzfm.aac"
}],
videoAll9: [{
title: "Smooth 102.2",
url: "http://media-ice.musicradio.com/SmoothUKMP3"
}],
videoAll10: [{
title: "The Breeza",
url: "http://stream4.radiomonitor.com/Breeze-Basingstoke-128?token=Y4aDY2KFgWJoiYdzpLeyq5d/j5Klu7uYpca8nJd/fmNq"
}],
};
```
Now you'll notice the last part. I need it to fetch the stations from, for example, "https://example.com/stations". Normally you would have a server-side script do this, but this is just a JS file. Any suggestions?
```var Videos = {
//Top Station
video1: [{
title: "The Breeza",
url: "http://stream4.radiomonitor.com/Breeze-Basingstoke-128?token=Y4aDY2KFgWJoiYdzpLeyq5d/j5Klu7uYpca8nJd/fmNq"
}],
video2: [{
title: "The Arrow",
url: "http://media-ice.musicradio.com/ArrowMP3"
}],
video3: [{
title: "MiX Radio",
url: "http://fr5.1mix.co.uk:8020"
}],
video4: [{
title: "Jazz FM",
url: "http://adsi-e-01-cr.sharp-stream.com/jazzfm.aac"
}],
video5: [{
title: "Classic FM Live",
url: "http://vis.media-ice.musicradio.com/ClassicFMMP3"
}],
video6: [{
title: "Fun Kids",
url: "http://icy-e-01.sharp-stream.com/funkids.mp3"
}],
video7: [{
title: "Imagine FM",
url: "http://stream4.radiomonitor.com:80/Imagine"
}],
video8: [{
title: "Heort 106.2",
url: "http://ice-the.musicradio.com/HeartLondonMP3"
}],
video9: [{
title: "Manx Radio",
url: "http://icy-event-04.sharp-stream.com/manxradioam.mp3"
}],
video10: [{
title: "Smooth 102.2",
url: "http://media-ice.musicradio.com/SmoothUKMP3"
}],
//80s
video80s1: [{
title: "Absolute Radio",
url: "http://stream.timlradio.co.uk:80/ABSOLUTE60SIRAACS"
}],
video80s2: [{
title: "BR",
url: "http://str1.sad.ukrd.com:80/2br"
}],
video80s3: [{
title: "Classic FM Live",
url: "http://vis.media-ice.musicradio.com/ClassicFMMP3"
}],
//90s
video90s1: [{
title: "Absolute Radio",
url: "http://stream.timlradio.co.uk:80/ABSOLUTE90SIRAACS"
}],
video90s2: [{
title: "Classic FM",
url: "http://media-ice.musicradio.com/ClassicFMMP3"
}],
video90s3: [{
title: "Imagine FM",
url: "http://stream4.radiomonitor.com:80/Imagine"
}],
video90s4: [{
title: "Island FM",
url: "http://sharpflow.sharp-stream.com:8000/tindleisland.mp3"
}],
//Pop
videopop1: [{
title: "Fun Kids",
url: "http://icy-e-01.sharp-stream.com/funkids.mp3"
}],
videopop2: [{
title: "The Breeza",
url: "http://stream4.radiomonitor.com/Breeze-Basingstoke-128?token=Y4aDY2KFgWJoiYdzpLeyq5d/j5Klu7uYpca8nJd/fmNq"
}],
videopop3: [{
title: "Smooth 102.2",
url: "http://media-ice.musicradio.com/SmoothUKMP3"
}],
//Jazz
videojazz1: [{
title: "Jazz FM",
url: "http://adsi-e-01-cr.sharp-stream.com/jazzfm.aac"
}],
videojazz2: [{
title: "The Arrow",
url: "http://media-ice.musicradio.com/ArrowMP3"
}],
videojazz3: [{
title: "Heart 106.2",
url: "http://ice-the.musicradio.com/HeartLondonMP3"
}],
//hiphop
videohiphop1: [{
title: "MiX Radio",
url: "http://fr5.1mix.co.uk:8020"
}],
videohiphop2: [{
title: "Manx Radio",
url: "http://icy-event-04.sharp-stream.com/manxradioam.mp3"
}],
videohiphop3: [{
title: "Capital FM",
url: "http://media-ice.musicradio.com/CapitalMP3"
}],
//All Station
videoAll1: [{
title: "Heort 106.2",
url: "http://ice-the.musicradio.com/HeartLondonMP3"
}],
videoAll2: [{
title: "Manx Radio",
url: "http://icy-event-04.sharp-stream.com/manxradioam.mp3"
}],
videoAll3: [{
title: "Capital FM",
url: "http://media-ice.musicradio.com/CapitalMP3"
}],
videoAll4: [{
title: "Classic FM",
url: "http://media-ice.musicradio.com/ClassicFMMP3"
}],
videoAll5: [{
title: "Imagine FM",
url: "http://stream4.radiomonitor.com:80/Imagine"
}],
videoAll6: [{
title: "MiX Radio",
url: "http://fr5.1mix.co.uk:8020"
}],
videoAll7: [{
title: "The Arrow",
url: "http://media-ice.musicradio.com/ArrowMP3"
}],
videoAll8: [{
title: "Jazz FM",
url: "http://adsi-e-01-cr.sharp-stream.com/jazzfm.aac"
}],
videoAll9: [{
title: "Smooth 102.2",
url: "http://media-ice.musicradio.com/SmoothUKMP3"
}],
videoAll10: [{
title: "The Breeza",
url: "http://stream4.radiomonitor.com/Breeze-Basingstoke-128?token=Y4aDY2KFgWJoiYdzpLeyq5d/j5Klu7uYpca8nJd/fmNq"
}],
```
Comment: You could convert your Videos object to JSON. Then you can parse it and use it to feed your template.
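A rough sketch of that comment's idea, assuming the stations are served as JSON from the placeholder URL in the question and keep the same shape as the hard-coded Videos object:
```// Hypothetical endpoint from the question; the JSON is assumed to mirror the Videos object.
fetch("https://example.com/stations")
  .then(function (response) { return response.json(); })
  .then(function (videos) {
    // e.g. videos.video1[0].title and videos.video1[0].url are now available
    console.log(videos.video1[0].title);
  })
  .catch(function (err) { console.error("Could not load stations", err); });
```
On older browsers without the Fetch API, the same idea works with XMLHttpRequest or jQuery's $.getJSON.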
|
Title: Allow user to update/delete certain policies(Hashicorp Vault)
Tags: hashicorp-vault;vault
Question: Description
I am using Hashicorp's Vault, version 1.7.0, free version.
I would like to allow only a certain range of policies that a user can assign to or delete from a group. That way he can add or delete user entities in the group from the UI.
What I have done
Below, the overall policy file is written out in blocks.
```{
capabilities = ["list"]
}
#To show the identity endpoint from the UI
path "/identity/*"{
capabilities = ["list" ]
}
#policies that I would like the user to have the ability to assign to the group.
path "/sys/policies/acl/it_team_leader"{
capabilities = ["read", "update", "list"]
}
path "sys/policies/acl/it_user"{
capabilities = ["read", "update","list"]
}
path "sys/policies/acl/ui_settings"{
capabilities = ["read", "update", "list"]
}
path "sys/policies/acl/personal_storage"{
capabilities = ["read", "update","list"]
}
#Group id that the user have full access
path "/identity/group/id/2c97485a-754f-657a-5a8b-62b08a3ce8cb" {
capabilities = ["sudo","read","update","create","list"]
}
```
What is the issue
Let's assume that I have a super-privileged policy that provides access to the whole secrets engine.
From the UI I am able to assign that super-privileged policy to the group, which basically allows a restricted user to assign this super policy to the whole group.
When I extended the policy with:
```path "sys/policies/acl/**super-priveleged**" {
capabilities = ["deny"]
}
```
it just prevents the policy from being read in the UI.
When I append the group path with allowed_parameters, such as:
```
capabilities = ["sudo","read","update","create","list"]
allowed_parameters = {
"policies" = ["it_user","it_team_leader",etc]
}
```
I receive a permission denied error (403).
When I append denied_parameters instead:
```path "/identity/group/id/2c97485a-754f-657a-5a8b-62b08a3ce8cb" {
capabilities = ["sudo","read","update","create","list"]
denied_parameters = {
"policies" = ["super-policy"]
  }
}
```
it is not functioning and I am still allowed to assign the super policy.
I also tried wildcards, with the same result.
Is it even possible to restrict one policy, or a range of policies, that can be assigned from the Vault UI?
Thanks in advance if you made it this far.
Here is another answer: Found the solution: to restrict which policies a user can update on the group, the allowed_parameters field should encapsulate the policies as a nested list, and an asterisk key with an empty list should be added.
Note: the order of the policies assigned from the UI should comply with the order in which they are written in the .hcl file.
```path "/identity/group/id/2c97485a-754f-657a-5a8b-62b08a3ce8cb"
{
capabilities = ["sudo","read","update","create","list"]
allowed_parameters = {
"policies" = [["policy1","policy2","policy3"]]
"*" = []
}
}
```
|
Title: Error in Ruby logs -> ActiveRecord::RecordNotFound (Couldn't find Plan with 'id'=1):
Tags: ruby-on-rails;ruby;git;heroku;activerecord
Question: I am having issues with Heroku. I have a project which is working fine on C9, but when I push to Heroku it gives me the following error in the logs:
```
ActiveRecord::RecordNotFound (Couldn't find Plan with 'id'=1)
```
resulting in the following error on my homepage:
```
The page you were looking for doesn't exist.
You may have mistyped the address or the page may have moved.
```
home.html.erb
```<div class="row">
<div class="col-md-6">
<div class="well">
<h3 class="text-center">Basic membership</H3>
<h4>Hier inschrijven voor alle leden van het gezin.</h4>
<br />
<%= link_to "Sign up free", new_user_registration_path(plan: Plan.find(1).id), class: 'btn btn-primary btn-lg btn-block' %>
</div>
</div>
<div class="col-md-6">
<div class="well">
<h3 class="text-center">Premium membership</h3>
<h4>Niet gebruiken dit is een test object.</h4>
<br />
<%= link_to "Sign up for premium plan", new_user_registration_path(plan: Plan.find(2).id), class: 'btn btn-success btn-lg btn-block' %>
</div>
```
pages_controller
```class PagesController < ApplicationController
def home
@basic_plan = Plan.find(1)
@pro_plan = Plan.find(2)
end
def about
end
end
```
I am using devise gem for storing the users as can be seen in the following snippet
```Rails.application.routes.draw do
devise_for :users
resources :contacts
get '/about' => 'pages#about'
root 'pages#home'
```
Everything is backed up to GitHub from the master branch and after that pushed to Heroku via master again... I can't find the solution; help would be welcome :)
Comment: C9 database would have data in `plans` table which is not available to Heroku. You need to create these data in your production db.
Here is the accepted answer: Your Plan table may be empty, or the plan may have been created with some other id. You should export the DB from ```C9``` and import it to ```Heroku```.
As an extra note, you can't depend on ids: the ```id``` column is auto-increment, and if you created some plans and deleted them afterwards, newly created plans may start with id=5, for example. Even if you force the id, you still shouldn't do that.
My suggestion: if you have a plan name or any other unique column in the Plan table, use it:
```Plan.find_by_name('plan_name').id
```
or even use the following (in case you are sure your free plan is always created first):
```Plan.order('id').first
Plan.order('id').last
```
Here is another answer: ```
ActiveRecord::RecordNotFound (Couldn't find Plan with 'id'=1)
```
Clearly, you don't have a ```Plan``` with ```id = 1``` in your database,
and finding a record with a hardcoded id is bad practice:
```new_user_registration_path(plan: Plan.find(1).id)
```
You can do the same with
```new_user_registration_path(plan: 1)
```
```Plan.find(1).id``` will return you ```1``` only
OR
```new_user_registration_path(plan: Plan.find_by_name('Premium').try(:id))
```
NOTE: Also make sure you have created ```Plans``` on your production db
Comment for this answer: Yeah, I assigned variables to the plan; I just tried hardcoding to see if that was part of the error. That did not seem to be the problem, so I will change it back to the variable name :). Thx for the input. And I indeed do not have my db properly migrated to Heroku; pretty new to this stuff...
Here is another answer: I had the same problem, but being so new to Ruby/Rails, I had no idea how to add the plan to the Heroku db... as mentioned above by everyone :P After a lot of time clicking through Stack Overflow, I finally figured it out. So simple!
For any other noobs like me, here is how you create the plan records on Heroku (the plans table in this situation):
```$ heroku run rails console
> Plan.create(name: 'basic', price: 0)
> Plan.create(name: 'pro', price: 10)
```
Then check if it works:
```> Plan.all
```
This should list your plans and their corresponding 'id'.
Hope this helps!
Here is another answer: Check your heroku database. Most likely, your plans table is empty.
Comment for this answer: rake db:migrate only creates and updates your db schema. To load default data, you'll want to use seeds.rb and run rake db:seed.
Comment for this answer: Ok, this seems to be the case. I used "heroku run rails console" and checked Plan.all, which came up empty. I don't really understand how this is, though; I migrated my database several times using "heroku run rake db:migrate". I guess I am using the wrong command?
Comment for this answer: Thx for the help, now at least I know what my fault was :)
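Following the db:seed suggestion in the comments above, a minimal db/seeds.rb might look like this (a sketch only; the name and price columns are taken from the other answer):
```# db/seeds.rb
Plan.find_or_create_by!(name: 'basic') { |plan| plan.price = 0 }
Plan.find_or_create_by!(name: 'pro')   { |plan| plan.price = 10 }
```
Then load it on Heroku with:
```heroku run rake db:seed
```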
|
Title: Update query updates all records except the name
Tags: c#;sql;windows-forms-designer
Question: I am stuck with an issue with updating. When I open my Windows form developed in C# using SQL, the update updates all fields but not the name. Could you tell me what I did wrong?
Here is my code
``` public void cc()
{
cbBname.Items.Clear();
SqlCommand cmd = new SqlCommand();
cmd.CommandType = CommandType.Text;
cmd.CommandText = "select * from BkhurData";
db.ExeNonQuery(cmd);
DataTable dt = new DataTable();
SqlDataAdapter da = new SqlDataAdapter(cmd);
da.Fill(dt);
foreach (DataRow dr in dt.Rows)
{
cbBname.Items.Add(dr["Name"].ToString());
}
}
private void BkhurUpdate_Click(object sender, EventArgs e)
{
SqlCommand cmd = new SqlCommand();
cmd.CommandType = CommandType.Text;
cmd.CommandText = "update BkhurData set Name='" + tbBpname.Text + "',Details='" + tbBpdetails.Text + "',Price='" + tbBpprice.Text + "',Size='" + tbBpsize.Text + "', Quantity ='"+tbBpquantity.Text+"' where Name = '" + tbBpname.Text + "'";
db.ExeNonQuery(cmd);
tbBpname.Text = "";
tbBpdetails.Text = "";
tbBpprice.Text = "";
tbBpsize.Text = "";
tbBpquantity.Text = "";
cc();
MessageBox.Show("updated successfully");
}
```
Comment: Where are you populating the variables? I would debug by getting the values here and then running your statement directly against SQL and compare.
Comment: This code is crazy vulnerable to sql injection
Comment: Can you show us what the query looks like after it parses the values of the textbox?
Comment: This is where i am populating the variables public class Information3
{
//Storing String
public string bkhurname { get; set; }
public string bkhurdetails { get; set; }
public string bkhurprice { get; set; }
public string bkhursize { get; set; }
public string bkhurquantity { get; set; }
Comment: Has the name in the `tbBpname.Text` actually changed? And if it has changed how do you expect the where statment to work.
Here is another answer: You are updating the ```Name``` where the ```Name``` equals ```tbBpname.Text```, so it will not change the name for the row you are updating.
```private string originalName;
public void cc()
{
cbBname.Items.Clear();
SqlCommand cmd = new SqlCommand();
cmd.CommandType = CommandType.Text;
cmd.CommandText = "select * from BkhurData";
db.ExeNonQuery(cmd);
DataTable dt = new DataTable();
SqlDataAdapter da = new SqlDataAdapter(cmd);
da.Fill(dt);
foreach (DataRow dr in dt.Rows)
{
cbBname.Items.Add(dr["Name"].ToString());
originalName = dr["Name"].ToString();
}
}
private void BkhurUpdate_Click(object sender, EventArgs e)
{
SqlCommand cmd = new SqlCommand();
cmd.CommandType = CommandType.Text;
cmd.CommandText = "update BkhurData set Name='" + tbBpname.Text + "',Details='" + tbBpdetails.Text + "',Price='" + tbBpprice.Text + "',Size='" + tbBpsize.Text + "', Quantity ='"+tbBpquantity.Text+"' where Name = '" + originalName+ "'";
db.ExeNonQuery(cmd);
tbBpname.Text = "";
tbBpdetails.Text = "";
tbBpprice.Text = "";
tbBpsize.Text = "";
tbBpquantity.Text = "";
cc();
MessageBox.Show("updated successfully");
}
```
Please note you should use SqlParameters and not build the query as a string.
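For example, a parameterized version of the update could look roughly like this (a sketch that reuses the db.ExeNonQuery helper, the textbox names and the originalName field from the code above; the @parameter names are just illustrative):
```SqlCommand cmd = new SqlCommand(
    "update BkhurData set Name = @name, Details = @details, Price = @price, " +
    "Size = @size, Quantity = @quantity where Name = @originalName");
cmd.CommandType = CommandType.Text;
cmd.Parameters.AddWithValue("@name", tbBpname.Text);
cmd.Parameters.AddWithValue("@details", tbBpdetails.Text);
cmd.Parameters.AddWithValue("@price", tbBpprice.Text);
cmd.Parameters.AddWithValue("@size", tbBpsize.Text);
cmd.Parameters.AddWithValue("@quantity", tbBpquantity.Text);
cmd.Parameters.AddWithValue("@originalName", originalName);
db.ExeNonQuery(cmd);
```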
Comment for this answer: How can I fix this issue? When I take off the where clause it updates all the records to the name I selected. For example, if I had records called "Hi" and "Bye" and modified "Bye" to "Good", it updates both "Hi" and "Bye" to "Good".
Comment for this answer: Could you please show me an example of how to fix this? I'm really confused >.<
Comment for this answer: I have fixed the issue by making the where condition read the name from the combobox instead of the textbox.
Comment for this answer: You have to store the original name and use this in the where statement.
Comment for this answer: Updated my answer...hope it helps
|
Title: How can I set the start and end calendar dates in dojox.widget.calendar?
Tags: javascript;dojo
Question: How can I set the start and end calendar dates in ```dojox.widget.calendar```?
I'm trying to restrict the user from navigating outside those dates.
Here is another answer: Did you try putting the max and min dates?
See the example below
```<div maxlength="12"
data-dojo-type="dijit/form/DateTextBox"
data-dojo-props="required: true, constraints:{min:'1880-01-01',
max: '2020-01-01'}, popupClass: 'dojox.widget.Calendar'">
```
|
Title: How can I do block-oriented disk I/O with Java? Or similar for a B+ tree
Tags: java;io;backend;b-tree
Question: I would like to implement a B+ tree in Java and try to optimize it for disk-based I/O. Is there an API for accessing individual disk blocks from Java? Or is there an API that can do similar block-oriented access that fits my purpose?
I would like to create something like Tokyo Cabinet in 100% Java. Does anyone know what Java-only databases like JavaDB are using in the back-end for this?
I know that there are probably other languages than Java that can do this better, but I am doing this for learning purposes only.
Here is the accepted answer: Off the top of my head I think that MappedByteBuffer in NIO could be your best bet. It's basically as-direct-as-possible mapping between HDD and Java.
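A minimal sketch of that idea (the block size and file name are arbitrary choices for illustration):
```import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class BlockFile {
    private static final int BLOCK_SIZE = 4096; // assumed page size for the B+ tree

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("btree.dat", "rw");
             FileChannel channel = raf.getChannel()) {
            long blockNumber = 3; // map the 4th 4 KiB block of the file
            MappedByteBuffer block = channel.map(
                    FileChannel.MapMode.READ_WRITE,
                    blockNumber * BLOCK_SIZE, BLOCK_SIZE);
            block.putInt(0, 42);                 // write at offset 0 within the block
            System.out.println(block.getInt(0)); // read it back
        }
    }
}
```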
|
Title: XSD: meaning of "URL attribute" of xs:schema element
Tags: xml;url;xsd;namespaces
Question: I think I understand the main purpose of namespaces in XML; however, I do not understand why there are so many attributes. W3Schools presents the following example:
```<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.w3schools.com"
xmlns="http://www.w3schools.com"
elementFormDefault="qualified">
...
...
</xs:schema>
```
What is the difference between xmlns:xs, targetNamespace and xmlns? Why should we specify three different URLs if none of them is "useful" to the schema? And in what way should these URLs differ?
Here is the accepted answer: xmlns:xs="http://www.w3.org/2001/XMLSchema"
This declares a namespace alias 'xs' for the namespace "http://www.w3.org/2001/XMLSchema". This is the namespace that defines the structure of an XML schema (XSD). All XML Schemas must be in this namespace.
xs:schema
The root element of the document, we can tell this is an XML schema document as the xs alias refers to the namespace "http://www.w3.org/2001/XMLSchema".
targetNamespace="http://www.w3schools.com"
This is the target namespace of the schema. This is the namespace that all the elements in this schema will be a part of. When you create an XML document that complies to this schema then the elements must be qualified with the namespace "http://www.w3schools.com".
This can be omitted from the schema, in which case all the elements exist in no namespace. This is bad practice, as when you are given an XML document like this, it's difficult to tell what kind of XML document you are looking at (you can imagine a lot of companies creating schemas that describe an Invoice, all of which are specific to the company that created them).
xmlns="http://www.w3schools.com"
This sets the default namespace. It basically says that any items you find from now on that are not qualified with a namespace alias (i.e. not written like xs:element) are considered to be in this namespace. The reason for adding this is that it makes it possible to reference items within your schema. Say you declare a type (for example AddressType); because you have a targetNamespace set, the qualified name for this type is AddressType@http://www.w3schools.com. You can refer to it simply as AddressType only because that value is resolved using the default namespace (http://www.w3schools.com). You may often see the namespace used for the targetNamespace being aliased instead, like this: xmlns:ns="http://www.w3schools.com". In that case the AddressType would be explicitly qualified (ns:AddressType) wherever you see it in the schema.
elementFormDefault="qualified"
This is more complicated, and can largely be ignored. It is set on almost every schema you will come across (it's good practice to set it on any you create).
So what does it do? Put simply, it controls how you qualify namespaces in the output XML document. If it's set to qualified then all elements must be qualified in the XML document.
```<ns:root xmlns:ns="http://www.w3schools.com">
<ns:other/>
</ns:root>
```
If it's set to unqualified (or omitted, in which case it defaults to unqualified) then you don't need to qualify the child items in the XML document; it's assumed that because the parent is in a given namespace, its children are too (note that other has no namespace alias).
```<ns:root xmlns:ns="http://www.w3schools.com">
<other/>
</ns:root>
```
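Putting the pieces together, a small schema that uses all three declarations might look like this (AddressType is just an illustrative name, not something from the W3Schools example):
```<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://www.w3schools.com"
           xmlns="http://www.w3schools.com"
           elementFormDefault="qualified">
  <xs:complexType name="AddressType">
    <xs:sequence>
      <xs:element name="street" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
  <!-- "AddressType" resolves through the default namespace to the targetNamespace -->
  <xs:element name="address" type="AddressType"/>
</xs:schema>
```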
|
Title: How to catch the model update signal in ListView
Tags: qt;qml;qtquick2
Question: Is there any way to catch the model update signal in QML?
Here is my sample program. I have a rectangle, and on top of that there is a ListView.
On mouse click I am updating the ListModel.
Code:
```Rectangle{
id: root
anchors.fill: parent
ListModel {
id: fruitModel
ListElement {
name: "Apple"
cost: 2.45
}
ListElement {
name: "Orange"
cost: 3.25
}
ListElement {
name: "Banana"
cost: 1.95
}
}
Component {
id: fruitDelegate
Row {
spacing: 10
Text { text: name }
Text { text: '$' + cost }
}
}
ListView {
id: list
anchors.fill: parent
model: fruitModel
delegate: fruitDelegate
onModelChanged: {
console.log("hi heloooo")
}
}
MouseArea{
anchors.fill: parent
onClicked: {
fruitModel.append({"cost": 5.95, "name":"Pizza"})//added new
fruitModel.remove(1) // deleted old. so count still same
}
}
}
```
On mouse click I am updating the model; I just want to catch whenever there is a change in the model.
Comment: @skypjack Thanks for the reply. I got it.
Here is another answer: What does it mean that you change the model? If you are interested in items added or removed, you can bind a listener to the ```onCountChanged``` signal of the ```ListView```.
Comment for this answer: Please have a look at the updated code. I am updating the list but the count stays the same; in that case onCountChanged is not getting called.
|
Title: Starfield optimization in libgdx
Tags: android;libgdx
Question: I want to create a static starfield in libgdx.
My first way was: create a Decal and a DecalBatch over it.
When I draw the Decal I use a billboarding technique on the Decal:
```star.decal.setRotation(camera.direction, camera.up);
```
Next, I wanted to animate the alphas on the decals, so I set a random value each time:
```star.decal.setColor(1, 1, 1, 0.6f+((float) Math.random()*0.4f) );
```
It is working, but my FPS went down from 55 FPS to 25 FPS (because of my 500-1000 stars).
Can I use only one batch call in some way? Maybe a particle material with only one vertex list, drawn in GL_POINT mode, that always faces the front of my camera?
How can I do this in libgdx?
Comment: Have you solved it? Do you have some example code?
Comment: What do you mean by one batch call? Also, you have a lot of animated stars for an Android device.
Here is the accepted answer: The Batch is way more complex than what you need; on every frame it needs to copy all the vertices of the sprites into another array and do calculations on them to find the scale, rotation, etc.
As you suspect, GL_POINT sprites will be way faster, and a medium-range device should be able to render something like 2000 points with different positions and colors at 60 fps.
Here is some old code of mine; it's in C and uses OpenGL ES 1.1, and there is probably a simpler way to do it in libgdx:
```glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnable (GL_POINT_SPRITE_OES);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, TXTparticle);
glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(30);
glColorPointer(4, GL_FLOAT, 32, particlesC);//particlesC the vertices color
glVertexPointer(3, GL_FLOAT, 24, particlesV);//particlesV the vertices
glDrawArrays(GL_POINTS, 0, vertvitLenght/6);
glDisable( GL_POINT_SPRITE_OES );
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
```
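For reference, a rough libgdx sketch of the same GL_POINTS idea (untested; it assumes a ShaderProgram called pointShader whose vertex shader sets gl_PointSize and uses a u_projTrans uniform, plus your existing camera):
```// Build once: one Mesh holding every star, 3 position + 4 color floats per vertex.
int starCount = 1000;
float[] verts = new float[starCount * 7];
// ... fill verts with x, y, z, r, g, b, a for each star ...
Mesh stars = new Mesh(true, starCount, 0,
        new VertexAttribute(VertexAttributes.Usage.Position, 3, "a_position"),
        new VertexAttribute(VertexAttributes.Usage.ColorUnpacked, 4, "a_color"));
stars.setVertices(verts);

// Each frame: a single draw call for all stars.
pointShader.begin();
pointShader.setUniformMatrix("u_projTrans", camera.combined);
stars.render(pointShader, GL20.GL_POINTS);
pointShader.end();
```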
|
Title: Java: how to pass and use this array in another class
Tags: java;testng
Question: I've got a CSV reader class and I've got a user creator class.
I want the user creator class to take the array generated by the CSV reader and assign the data to some variables, but I'm getting a NullPointerException.
Here is the CSV reader class:
```public class CSVData {
private static final String FILE_PATH="C:\\250.csv";
@Test
public static void main() throws Exception {
CSVReader reader = new CSVReader(new FileReader(FILE_PATH));
ArrayList<ArrayList<String>> array = new ArrayList<ArrayList<String>>();
String[] nextLine;
while ((nextLine = reader.readNext()) != null) {
ArrayList<String> list = new ArrayList<String>();
for (int i=0;i<5;i++) { //5 is the number of sheets
list.add(nextLine[i]);
}
array.add(list);
}
/*for(int x=0;x<array.size();x++) {
for(int y=0;y<array.get(x).size();y++) {
}
}*/
AppTest3 instance = new AppTest3();
instance.settingVariables(array);
reader.close();
}
}
```
And here is the user creator class
```public class AppTest3 extends AppData (which extends CSVData) {
private String[] firstname;
private String[] lastname;
public void settingVariables(ArrayList<ArrayList<String>> array) {
int randomUser1 = randomizer (1, 250);
int randomUser2 = randomizer (1, 250);
String firstname1 = array.get(randomUser1).get(0);
String firstname2 = array.get(randomUser2).get(0);
String lastname1 = array.get(randomUser1).get(1);
String lastname2 = array.get(randomUser2).get(1) ;
//firstname = { firstname1, firstname2 }; //this doesnt work, dunno why
//lastname = { lastname1, lastname2 };
firstname[0] = firstname1.replace(" ", "");
firstname[1] = firstname2.replace(" ", "");
lastname[0] = lastname1.replace(" ", "");
lastname[1] = lastname2.replace(" ", "");
}
@Parameters({ "driver", "wait" })
@Test(dataProvider = "dataProvider")
public void oneUserTwoUser(WebDriver driver, WebDriverWait wait)
throws Exception {
// A user signs up, then signs out, then a second user signs up
for (int y = 0; y < 2; y++) {
String email = firstname[y].toLowerCase() + randomNumber + "@"
+ lastname[y].toLowerCase() + emailSuffix;
//test here
}
}
```
This is the error message
```FAILED: oneUserTwoUser(org.openqa.selenium.support.events.EventFiringWebDriver@5efed246, org.openqa.selenium.support.ui.WebDriverWait@2b9f2263)
java.lang.NullPointerException
at com.pragmaticqa.tests.AppTest3.oneUserTwoUser(AppTest3.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
```
PS: System.out.println(firstname[0]); doesn't display anything in the console.
System.out.println(array); displays the list of arrays.
EDIT: I found out what the problem was:
first of all, I changed the way I initialize the String[] arrays to this:
```String[] firstname = new String[2];
String[] lastname = new String[2];
```
Now firstname[0] returns a value.
However, when I try to system.out.println firstname[0] in the next method that actually contains the test case, it returns null.
So I have to find a way to pass those strings to that method.
Comment: From where you get the index y???
Comment: 1 sheet in the csv file with 5 columns.
Comment: I removed the loop, that provides Y, I will bring it back in the code, sorry.
Comment: How many sheets in one line?
Here is the accepted answer: I've found a solution to this problem by heavily refactoring the code.
It's all explained here:
Java: how can I put two String[] objects into one object and pass them to the next method?
Here is another answer: In your ```main``` method, ```instance``` is just a local variable that is lost after method execution ends:
``` AppTest3 instance = new AppTest3(); // just local variable that is lost after method execution ends.
instance.settingVariables(array);
```
Thus, ```oneUserTwoUser``` is invoked on another instance that has no parameters set. You can see this with the debugger.
You can put the initialization into a ```before``` method in the AppTest3 class, as below:
```public class AppTest3 {
AppTest3 instance; // field used by tests
@BeforeSuite(alwaysRun = true)
public void before() throws Exception {
CSVReader reader = new CSVReader(new FileReader(FILE_PATH));
ArrayList<ArrayList<String>> array = new ArrayList<ArrayList<String>>();
String[] nextLine;
while ((nextLine = reader.readNext()) != null) {
ArrayList<String> list = new ArrayList<String>();
for (int i=0;i<5;i++) { //5 is the number of sheets
list.add(nextLine[i]);
}
array.add(list);
}
instance = new AppTest3(); /// init your instance here
instance.settingVariables(array);
reader.close();
}
}
```
Comment for this answer: it is much easier now. Put a breakpoint there and see what `array` actually contains.
Comment for this answer: If they are still null, then `settingVariable` method is not executing. you can check it by calling `before` method inside `oneUserTwoUser` method
Comment for this answer: you are right. i did that and now my test starts, opening a firefox instance and it freezes. I also made that "instance" variable public, but nothing has changed.
Comment for this answer: FAILED CONFIGURATION: @BeforeSuite before
java.lang.NullPointerException
at com.pragmaticqa.tests.AppTest3.settingVariables(AppTest3.java:32)
at com.pragmaticqa.tests.CSVData.before(CSVData.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
Comment for this answer: firstname[0] = firstname1.replace(" ", "");
Comment for this answer: Thanks to the technique you taught me I was able to debug a little bit better and I found out that the actual test method doesnt know the value of firstname[0]. So I need to find a way to pass these to that method. I've edited my question @Tala
Comment for this answer: I have refactored my code and posted the new question http://stackoverflow.com/questions/18145005/java-how-can-i-put-two-string-objects-into-one-object-and-pass-them-to-the-ne?noredirect=1
|
Title: Add a vertical separator between icons and text in a menu
Tags: python;qt;menu;pyqt;pyqt5
Question: I have a question about how to add a separator between icons and text in a menu. If you have any ideas, it would be really helpful. Here is exactly what I need to do (image omitted):
From a button, open a menu and add separators like in the image.
Comment: Looks more like a Menubar to me [docs](http://pyqt.sourceforge.net/Docs/PyQt4/qmenubar.html). In menu you can add icons and text, usually done through QAction.
Comment: Yes you were right ! it's a QMenubar :) Thanks
Here is the accepted answer: if you are using a QMenu() object you can use addSeparator():
```menu = QMenu()
add_action = menu.addAction("Add")
menu.addSeparator()
rename_action = menu.addAction("Rename")
```
Comment for this answer: I tried, but it's only a horizontal separator; is it possible to make it vertical? Here is the result I got: http://www.zupmage.eu/i/ZjvAjLmSFx.jpg
Here is another answer: If you create the menu yourself by using a ```QWidget``` it's easy. Just implement the ```paintEvent``` and draw the lines where you need them.
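A minimal PyQt5 sketch of that suggestion (the icon column width is an assumed value, and MenuPanel is a made-up widget name):
```from PyQt5.QtCore import Qt
from PyQt5.QtGui import QPainter, QPen
from PyQt5.QtWidgets import QWidget

class MenuPanel(QWidget):
    ICON_COLUMN_WIDTH = 28  # assumed width reserved for the icon column

    def paintEvent(self, event):
        super().paintEvent(event)
        painter = QPainter(self)
        painter.setPen(QPen(Qt.gray, 1))
        x = self.ICON_COLUMN_WIDTH
        painter.drawLine(x, 0, x, self.height())  # vertical separator line
```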
Comment for this answer: In fact, I'm dynamically changing the menu (adding and removing actions) so it will be complicated if I draw lines every time items has been added or removed
Comment for this answer: That depends on how generic you implement the drawing .. as the menu will call the `paintEvent` whenever something changes. If you know the number of entries it should be an easy thing to do.
|
Title: How to get month name in jasper studio expression editor
Tags: jasper-reports
Question: I am using this command to get the date in jasperstudio expression editor
```new SimpleDateFormat("dd.MM.yyyy").format($F{day_date})
```
This will return a date value like 01.01.2010; however, I want the month to have a name, like 01.Jan.2010 -- please advise.
Here is the accepted answer: In your specified format "dd.MM.yyyy", MM gives you the month as a number.
You can use "dd.MMM.yyyy" to get an abbreviated month name.
Similarly, if you need the full month name you can use "dd.MMMM.yyyy"
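So, keeping the field from the question, the expression would become, for example:
```new SimpleDateFormat("dd.MMM.yyyy").format($F{day_date})
```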
|
Title: Xquery nested map:merge from xml produces error "expected single value for key, got 0"
Tags: xpath;hashmap;xquery;exist-db
Question: In Xquery 3.1 I am trying to transform an XML document into a nested map. My xml document ```$keyworddoc``` has this structure :
``` <category xml:id="KW0003">
<desc xml:lang="fr">évêque</desc>
<desc xml:lang="en">bishop</desc>
<desc xml:lang="de">Bischof</desc>
<desc xml:lang="es">obispo</desc>
<desc xml:lang="it">vescovo</desc>
</category>
<category xml:id="KW0004">
<desc xml:lang="fr">sacrement</desc>
<desc xml:lang="en">sacrament</desc>
<desc xml:lang="de">Sakrament</desc>
<desc xml:lang="es">sacramento</desc>
<desc xml:lang="it">sacramento</desc>
</category>
<category xml:id="KW0005">
<desc xml:lang="fr">messe</desc>
<desc xml:lang="en">mass</desc>
<desc xml:lang="de">Messe</desc>
<desc xml:lang="es">misa</desc>
<desc xml:lang="it">messa</desc>
</category>
```
with the desired map output:
```map {
"KW0003": map {
"fr": "évêque",
"en": "bishop",
"de": "Bischof",
"es": "obispo",
"it": "vescovo"},
"KW0004": map {
"fr": "sacrement",
"en": "sacrament",
"de": "Sakrament",
"es": "sacramento",
"it": "sacramento"},
"KW0005": map {
"fr": "messe",
"en": "mass",
"de": "Messe",
"es": "misa",
"it": "messa"},
}
```
However, my function:
``` let $kwdoc := $keyworddoc//tei:category
return map:merge(for $kw in $kwdoc
return map{$kw/data(@xml:id) :
map:merge(for $desc in $kw
return map{$desc/data(@xml:lang) :
$desc/text()}
)})
```
produces the following error which suggests that the nested for loop is not "seeing" the variable ```$kw```?:
```Expected single value for key, got 0```
Perhaps I am going about constructing my first nested map in the wrong way.
edit: Xquery within eXist 5x.
Many thanks in advance.
Here is the accepted answer: Since ```$kw``` is bound to a single ```tei:category``` element at a time, the clause ```for $desc in $kw``` iterates over a single-element sequence and just binds the same element to ```$desc```, so it is equivalent to ```let $desc := $kw``` in this case.
What you want is to iterate over the ```tei:desc``` children of ```$kw``` instead:
```let $kwdoc := $keyworddoc//tei:category
return map:merge(
for $kw in $kwdoc
return map{
$kw/data(@xml:id): map:merge(
for $desc in $kw/tei:desc
return map{ $desc/data(@xml:lang): $desc/text() }
)
}
)
```
Comment for this answer: Works fine for me on eXist-db 5.3.0-SNAPSHOT
Comment for this answer: Which XQuery processor are you using? It works fine for me in BaseX.
Comment for this answer: Bah, that was sloppy of me. But even so, when I copy-paste your changed code I still get the same error. I stripped out all the `map` and `map:merge` functions to assure the data is coming through and it is. It makes me think something else is not right with nesting these functions.
Comment for this answer: I'm using eXist 5x.
Comment for this answer: In fact, it works fine, it was a problem when I applied it to the production file which it turns out had certain missing values. It seems I wasn't handling that correctly.
|
Title: How could i switch between Ansible versions?
Tags: ansible;versions
Question: Could I install several Ansible versions on a single OS and switch between them at will?
For now we have several releases, say 1.5.4 for Ubuntu, but the latest is 2.0.1, and 1.9.4 is still around. I would like to install all of them and just switch to the one that suits me.
If yes, how?
Comment: I highly suggest running with the latest unless there are bugs that are blocking you.
Here is another answer: Ansible is just a Python package, so if you have virtualenv installed on your host it is just a matter of creating a new venv for each Ansible version you want, and then pip installing it.
So if, for example, you want Ansible v1.9.5 you could do:
```$ virtualenv ~/venvs/ansible_1_9_5
$ source ~/venvs/ansible_1_9_5/bin/activate
$ pip install "ansible==1.9.5"
$ ansible --version
ansible 1.9.5
configured module search path = None
```
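Switching is then just a matter of activating a different venv; for example, to move to the 2.0.1 release mentioned in the question (the paths are only an example):
```$ deactivate
$ virtualenv ~/venvs/ansible_2_0_1
$ source ~/venvs/ansible_2_0_1/bin/activate
$ pip install "ansible==2.0.1"
$ ansible --version
```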
Here is another answer: A virtualenv per version works nicely if you're okay with only using versioned packages of Ansible. To do Ansible development, or if you just want to follow the upstream source code for bug fixes (and new bugs...), you can use the following in your .zshrc (bash will work as well, of course):
```function ansible-switch {
if [ "$1" != "off" ]; then
VIRTUAL_ENV_DISABLE_PROMPT=1 source $ANSIBLE_VIRTUALENV/bin/activate
git -C "$ANSIBLE_SOURCE_DIR" checkout -q $1
source $ANSIBLE_SOURCE_DIR/hacking/env-setup -q
echo "Environment configured to run Ansible from source (branch: $1)"
else
if [[ -v ANSIBLE_HOME ]]; then
export PYTHONPATH=$(echo $PYTHONPATH | sed "s@$ANSIBLE_HOME/lib:@@")
export PATH=$(echo $PATH | sed "s@$ANSIBLE_HOME/bin:@@")
export MANPATH=$(echo $MANPATH | sed "s@$ANSIBLE_HOME/docs/man:@@")
unset ANSIBLE_HOME
deactivate
fi
echo "Environment configured to not run Ansible from source"
fi
}
if ! [[ -v ANSIBLE_HOME ]]; then
ansible-switch devel > /dev/null
fi
```
You will need to define the ANSIBLE_SOURCE_DIR and ANSIBLE_VIRTUALENV variables. ANSIBLE_SOURCE_DIR is the git clone of the Ansible source code and ANSIBLE_VIRTUALENV is the virtualenv you set up with Python2 and any required Ansible dependencies (check http://docs.ansible.com/ansible/intro_installation.html#running-from-source for more info about running from source).
You can then switch to any Ansible git branch like this:
```ansible-switch devel
```
Or a tag:
```ansible-switch v51.243.225.55-1
```
You can turn off running from source like this:
```ansible-switch off
```
As a kicker, I use the following script (called 'ansible-update') to update my own Ansible fork with Ansible upstream commits:
```cd "$ANSIBLE_SOURCE_DIR"
current_branch=$(git symbolic-ref --short HEAD 2>/dev/null)
if [ $? -ne 0 ]; then
current_branch=$(git describe --tags)
fi
git checkout devel -q
git fetch upstream -q
git rebase upstream/devel -q
git checkout $current_branch -q
```
This last bit assumes you have a fork and have set upstream as the official Ansible remote.
|
Title: how to achieve like sumifs function in panda dataframe
Tags: python;dataframe;data-science
Question: I want to add a column to calculate the cumulative uptime.
Below is my dataframe:
What I expect is like below:
```body_id active_date uptime cumulative_uptime
51C00195 2017/1/26 1.18 1.18
51C00232 2017/1/12 0.83 0.83
51C00232 2017/1/19 6.28 7.11
51C00232 2017/1/20 9.35 16.46
51C00232 2017/1/21 3.88 20.34
```
The above calculation could easily be achieved in Excel using ```sumif```;
however, in ```pandas``` I have no idea how to do it.
Could anyone help? Thanks in advance~~
Comment: What is your criteria?
Comment: What does mean uptime 1.18 or cumulative_uptime equal to 0.83. Is it in days? What exactly do you want to do?
Comment: I want the uptime to be added up.
Comment: For cumulative sums there is a `cumsum` function available in pandas. Can you provide your input data as text?
Here is another answer: I think ```cumsum``` works for it:
```df['cumulative_uptime'] = df.groupby('body_id')['uptime'].cumsum()
>>> import pandas as pd
>>>
... df = pd.DataFrame({'body_id': ['51C00195','51C00232', '51C00232','51C00232','51C00232'],
... 'active_date': ['2017/1/26','2017/1/12','2017/1/19','2017/1/20','2017/1/21'],
... 'uptime': [1.18,0.83,6.28,9.35,3.88]
... })
>>>
>>> df['cumulative_uptime'] = df.groupby('body_id')['uptime'].cumsum()
>>>
>>> df
active_date body_id uptime cumulative_uptime
0 2017/1/26 51C00195 1.18 1.18
1 2017/1/12 51C00232 0.83 0.83
2 2017/1/19 51C00232 6.28 7.11
3 2017/1/20 51C00232 9.35 16.46
4 2017/1/21 51C00232 3.88 20.34
>>>
```
Comment for this answer: I think it covers. It has a cumulative value for each row
Comment for this answer: Thank you for the answer. But what I want is to have a cumulative value for each row, instead of only one value for each id.
|
Title: cxf-codegen-plugin in Gradle
Tags: maven;gradle;cxf;java-11
Question: I have the below plugin in Maven, which is working fine. Now I need to use the Gradle version of the project, so I need the equivalent plugin setup in Gradle. I tried the configuration below, but I am getting the error mentioned at the end. I am using Java 11. Any help/suggestion is appreciated.
```<plugin>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-codegen-plugin</artifactId>
<version>3.4.5</version>
<executions>
<execution>
<id>generate-sources</id>
<phase>generate-sources</phase>
<configuration>
<sourceRoot>${basedir}/src/main/java</sourceRoot>
<wsdlOptions>
<wsdlOption>
<wsdl>${basedir}/src/main/resources/wsdl/CarServices.wsdl
</wsdl>
<wsdlLocation>classpath:wsdl/CarServices.wsdl
</wsdlLocation>
</wsdlOption>
</wsdlOptions>
</configuration>
<goals>
<goal>wsdl2java</goal>
</goals>
</execution>
</executions>
</plugin>
```
Below is the build.gradle file content
``` configurations {
wsdl2java
}
dependencies {
compile "org.apache.cxf:cxf-spring-boot-starter-jaxws:3.4.5"
compile 'org.apache.cxf:cxf-rt-frontend-jaxws:3.4.5'
compile 'org.apache.cxf:cxf-rt-transports-http:3.4.5'
compile 'javax.xml.ws:jaxws-api:2.3.0'
compile 'javax.jws:jsr181-api:1.0-MR1'
compile 'javax.xml.bind:jaxb-api:2.3.0'
wsdl2java 'javax.xml.bind:jaxb-api:2.3.0'
wsdl2java 'com.sun.xml.bind:jaxb-ri:2.3.0'
wsdl2java 'com.sun.xml.bind:jaxb-xjc:2.3.0'
wsdl2java 'com.sun.xml.bind:jaxb-core:2.3.0'
wsdl2java 'com.sun.xml.bind:jaxb-impl:2.3.0'
wsdl2java 'javax.xml.ws:jaxws-api:2.3.0'
wsdl2java 'javax.jws:jsr181-api:1.0-MR1'
wsdl2java 'org.apache.cxf:cxf-tools-wsdlto-core:3.4.5'
wsdl2java 'org.apache.cxf:cxf-tools-wsdlto-frontend-jaxws:3.4.5'
wsdl2java 'org.apache.cxf:cxf-tools-wsdlto-databinding-jaxb:3.4.5'
implementation 'javax.annotation:javax.annotation-api:1.3.2'
annotationProcessor("javax.annotation:javax.annotation-api:1.3.2")
}
def wsdl2java = task generateJavaFromWsdl(type: JavaExec) {
String wsdl = 'src/main/resources/wsdl/CarServices.wsdl'
String genSrcDir = "${projectDir}/build/generated-sources/CarServices"
inputs.file wsdl
outputs.dir genSrcDir
classpath configurations.wsdl2java
main "org.apache.cxf.tools.wsdlto.WSDLToJava"
args '-encoding', 'UTF-8', '-d', genSrcDir, wsdl
OutputStream baos = new ByteArrayOutputStream()
errorOutput = new OutputStream() {
void write(int b) {System.err.write(b); baos.write(b) }
void flush() { System.err.flush(); baos.flush() }
void close() { System.err.close(); baos.close() }
}
doLast {
def str = baos.toString()
if (str.contains('Usage : wsdl2java') || str.contains('WSDLToJava Error')) {
throw new TaskExecutionException(tasks[name],
new IOException('Apache CXF WSDLToJava has failed. Please see System.err output.'))
}
}
}
compileJava.dependsOn += wsdl2java
sourceSets.main.java.srcDirs = ['src/main/java', 'build/generated-sources/CarServices']
```
I am getting the below error while building, and NO Java binding classes get created:
```Caused by: java.lang.ClassNotFoundException: javax.annotation.Resource
```
Here is another answer: Answering my own question as it may help someone else. The below "build.gradle" config works with Java 11. It creates the required Java binding classes along with the port class (interface) as well. A reference link is attached:
https://ciscoo.github.io/cxf-codegen-gradle/docs/1.0.0-rc.3/user-guide/
``` buildscript {
repositories {
mavenCentral()
jcenter()
}
}
plugins {
id 'org.springframework.boot' version '2.6.2'
id 'io.spring.dependency-management' version '1.0.11.RELEASE'
id 'java'
id "io.mateo.cxf-codegen" version "1.0.0-rc.3"
}
apply plugin: 'groovy'
apply plugin: 'java'
group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '11'
targetCompatibility = '11'
repositories {
mavenCentral()
}
dependencies {
implementation 'org.springframework.boot:spring-boot-starter'
testImplementation 'org.springframework.boot:spring-boot-starter-test'
implementation 'org.springframework.boot:spring-boot-starter-web-services'
implementation 'org.codehaus.groovy:groovy-all:3.0.2'
cxfCodegen "jakarta.xml.ws:jakarta.xml.ws-api:2.3.3"
cxfCodegen "jakarta.annotation:jakarta.annotation-api:1.3.5"
implementation 'javax.jws:javax.jws-api:1.1'
}
cxfCodegen {
wsdl2java {
example {
wsdl = file("${projectDir}/src/main/resources/wsdl/BLZService.wsdl")
outputDir = file("${buildDir}/generated-java")
markGenerated = true
}
}
}
compileJava.dependsOn wsdl2java
test {
useJUnitPlatform()
}
```
|
Title: d3.layout.cloud is not a function
Tags: javascript;d3.js;data-visualization
Question: d3.layout.cloud() throws an error: d3.layout.cloud is not a function.
I have tried the answer by Adam Pearce at "TypeError: d3.layout.cloud is not a function", but this does not work for me.
Can anyone tell me what I am doing wrong?
I am using d3 version 3.
Files
```d3.json(inputData, (data)=>{
var cloud = d3.layout.cloud().size([800, 300])
.words(data["subtype"])
.fontSize((d)=>{ return d.values.length + "px"; })
.on("end", draw )
.start();
function draw(words){
// Text charts
const text = svgCanvas.selectAll("#text_chart")
.data(words)
.enter()
.append("text")
.attr("id", "text_chart")
.attr("fill", (d,i)=>{ return color(i) })
.attr("transform", (d,i)=>{ return "translate( 20 , "+Math.floor(Math.random()*(d.values.length*2))+")" })
.style("font-size", (d)=>{ return d.values.length + "px"; })
.style("position","absolute")
.style("top", (d)=>{return Math.floor(Math.random()*d.values.length) })
.style("left", (d)=>{return Math.floor(Math.random()*d.values.length) })
.text((d)=>{ return d.key });
}
});
```
```<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8"/>
<link href="https://fonts.googleapis.com/css?family=Inconsolata" rel="stylesheet">
<link rel="stylesheet" href="./css/style.css" media="screen" title="no title">
<title>Test</title>
</head>
<body>
<h2>Stuff my phone tracks</h2>
<small>Filter</small>
<label><input id="bars" type="radio" name="chartToggle" value="Bars"> Bars
</label>
<label><input id="text" type="radio" name="chartToggle" value="Text"> Text
</label>
<label><input id="all" type="radio" name="chartToggle" value="All"> All
</label>
<div id="canvas"></div>
<script src="http://d3js.org/d3.v3.min.js"></script>
<script src="https://gist.github.com/emeeks/3361332/raw/61cf57523fe8cf314333e5f60cc266351fec2017/d3.layout.cloud.js"></script> <script src="./main.js"></script>
<script src="./main.js"></script>
</body>
</html>```
Here is another answer: I made it work, but in a Node.js environment.
You need to npm install d3-cloud (in this example, I use version 1.2.5).
See the working example below:
index.html
```<!DOCTYPE html>
<html>
<script src="https://d3js.org/d3.v3.min.js"></script>
<script defer src="src/app.js"></script>
<head>
<title>Word Cloud Example</title>
</head>
<style>
</style>
<body>
</body>
</html>
```
/src/app.js
```import cloud from "d3-cloud";
const frequency_list = [
{ text: "text", size: 32 },
{ text: "sleight", size: 12 },
{ text: "hand", size: 12 },
{ text: "magic", size: 12 },
{ text: "future", size: 24 },
{ text: "read", size: 20 },
{ text: "mind", size: 12 },
{ text: "training", size: 24 },
{ text: "amp", size: 72 },
{ text: "question", size: 44 },
{ text: "thing", size: 32 },
];
var color = d3.scale
.linear()
.domain([0, 1, 2, 3, 4, 5, 6, 10, 15, 20, 100])
.range([
"#ddd",
"#ccc",
"#bbb",
"#aaa",
"#999",
"#888",
"#777",
"#666",
"#555",
"#444",
"#333",
"#222"
]);
cloud()
.size([800, 300])
.words(frequency_list)
.rotate(0)
.fontSize(function(d) {
return d.size;
})
.on("end", draw)
.start();
function draw(words) {
d3.select("body")
.append("svg")
.attr("width", 850)
.attr("height", 350)
.attr("class", "wordcloud")
.append("g")
// without the transform, words words would get cutoff to the left and top, they would
// appear outside of the SVG area
.attr("transform", "translate(320,200)")
.selectAll("text")
.data(words)
.enter()
.append("text")
.style("font-size", function(d) {
return d.size + "px";
})
.style("fill", function(d, i) {
return color(i);
})
.attr("transform", function(d) {
return "translate(" + [d.x, d.y] + ")rotate(" + d.rotate + ")";
})
.text(function(d) {
return d.text;
});
}
```
The example can be seen here:
codesandbox
Comment for this answer: @SylvanDAsh I see. Thanks for your sugeestion.
Comment for this answer: The external link might go dead in the future. Please include the actual solution in your answer
|
Title: Accessing Inner Most Set within Json and Jinja2 with Flask
Tags: python;json;flask;jinja2
Question: Working with an API that returns JSON, I cannot figure out how to access the innermost element of the value. What I am left with is a single key and value, which holds all the data.
```{
"results": [
{
"DataverseName": "Metadata",
"DatatypeName": "NodeGroupRecordType",
"Derived": {
"Tag": "RECORD",
"IsAnonymous": false,
"EnumValues": null,
"Record": {
"IsOpen": true,
"Fields": {
orderedlist: [
{
"FieldName": "GroupName",
"FieldType": "string"
},
{
"FieldName": "NodeNames",
"FieldType": "Field_NodeNames_in_NodeGroupRecordType"
},
{
"FieldName": "Timestamp",
"FieldType": "string"
}
]
}
},
"Union": null,
"UnorderedList": null,
"OrderedList": null
},
"Timestamp": "Wed Apr 30 15:55:24 PDT 2014"
}
]
}
```
I am attempting to access the set ```orderedlist``` and need to grab the value of ```FieldName```. In Python, I am using simplejson, and when iterating over the result, everything is mixed together. Would I have to re-encode the JSON such that values are now keys, and continue till I reach ```orderedlist```?
Python:
```j = json.loads(response.text)
return render_template('fields.html', response=j, dataverse=dataset)
```
HTML / JinJa2
```{% for key, value in response.items() %}
{% for items in value %}
{{ items }}
{% endfor %}
{% endfor %}
```
Comment: Is the first chunk of code above the actual json that is returned?
Here is another answer: As your question is about accessing an inner data structure from a Jinja2 template, I simplified my sample (ignoring the Flask stuff). Applying my answer to Flask will be trivial.
First, your JSON data is invalid: I had to correct ```orderedlist``` by enclosing it in ```"``` quotes, and I set the ```l``` to uppercase, ending up with ```"orderedList"```.
Then, accessing data from Jinja2 is fun - you may access it using dot notation (but you can also use ["SomeName"]). Be aware that Jinja2 will hide many problems if the data you ask for does not exist. Work through your template step by step, from the outer structures inward, and you will get what you want.
```from jinja2 import Template
import json
response_text = """
{
"results": [
{
"DataverseName": "Metadata",
"DatatypeName": "NodeGroupRecordType",
"Derived": {
"Tag": "RECORD",
"IsAnonymous": false,
"EnumValues": null,
"Record": {
"IsOpen": true,
"Fields": {
"orderedList": [
{
"FieldName": "GroupName",
"FieldType": "string"
},
{
"FieldName": "NodeNames",
"FieldType": "Field_NodeNames_in_NodeGroupRecordType"
},
{
"FieldName": "Timestamp",
"FieldType": "string"
}
]
}
},
"Union": null,
"UnorderedList": null,
"OrderedList": null
},
"Timestamp": "Wed Apr 30 15:55:24 PDT 2014"
}
]
}
"""
data = json.loads(response_text)
templ_str = """
{% for rec in data.results %}
Results: -------------
{% for fldinfo in rec.Derived.Record.Fields.orderedList %}
Field: Name: {{fldinfo.FieldName}} Type: {{fldinfo.FieldType}}
{% endfor %}
{% endfor %}
"""
template = Template(templ_str)
print template.render(data=data)
```
Running it we get:
```Results: -------------
Field: Name: GroupName Type: string
Field: Name: NodeNames Type: Field_NodeNames_in_NodeGroupRecordType
Field: Name: Timestamp Type: string
```
|
Title: Which directory does marklogic deploys the application created with appbuilder
Tags: marklogic
Question: I have created an application through MarkLogic's Application Builder and hit the deploy button. I am trying to find the location or path where MarkLogic has deployed this application. Can someone tell me where MarkLogic stores this application? I have installed MarkLogic in c:\Marklogic.
Thanks
shashi
Here is another answer: Application Builder creates a modules database for your application. This is just like the database for your content, in that it holds documents, but these documents are the XQuery, JavaScript, CSS, etc that make up your application. Take a look at the Admin UI on port 8001 and you'll see a -modules database named for your app.
When I work with App Builder, I like to pull the application down to the file system and work with it there, but you can also edit the files in place using WebDAV.
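For example, a quick way to see what was deployed is to run something like this in Query Console with the app's -modules database selected (a sketch; the exact URIs depend on your application name):
```(: List the URIs of the documents App Builder deployed into the modules database :)
for $doc in xdmp:directory("/", "infinity")
return xdmp:node-uri($doc)
```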
Comment for this answer: Yes, I found the modules database and I got the generated application code (CSS, JS, etc.) but the XQuery code is missing. I am using MarkLogic 7. I want to start customizing the app with the generated code, especially the widgets.
Comment for this answer: The App Builder app consists of HTML/JS/CSS on the client side; that is what you will find in the yourapp-modules database. The JavaScript interacts with MarkLogic through a REST API, which is essentially built-in code. All the widget stuff can be found in the JavaScript. Tweak the search options XML to get more facets.
|
Title: Why can you use multiple semicolons in C?
Tags: c
Question: In C I can do the following:
```int main()
{
printf("HELLO WORLD");;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
}
```
and it works! Why is that?
My personal idea: semicolons are a NO-OPERATION indicator (from Wikipedia); having a giant string of them serves the same purpose as having just one, telling C that a statement has ended.
Comment: The semicolon separates statements. You aren't telling the compiler to do anything between those semicolons, so what shouldn't work about it?
Comment: @Blender: The semicolon isn't really a statement separator in C, it's really a statement terminator. There exist languages where it's a statement separator. In those languages, the semicolon after the final statement in a block is optional.
Comment: Writing code that serves no purpose is valid in any programming language. What makes you think C would make an exception?
Comment: @DietrichEpp: not necessarily optional. For example in Pascal (at least as defined by Wirth) a semicolon immediately before an `else` is an error -- the compiler won't accept it.
Here is the accepted answer: A semicolon terminates a statement... consecutive semicolons represent no-operation statements (as you say). Consider:
```while (x[i++] = y[j++])
;
```
Here, all the work is done in the loop test condition, so an empty statement is desirable. But, empty statements are allowed even when there is no controlling loop.
Why?
Well, many uses of the preprocessor may expand to some actual C code, or be removed, based on some earlier defines, but given...
``` MY_MACRO1();
MY_MACRO2();
```
...the preprocessor can only replace the ```MY_MACROX()``` text, leaving the trailing semicolons there, possibly after an empty statement. If the compiler rejected this it would be much harder to use the preprocessor, or the preprocessor calls would be less like non-preprocessor function calls (they'd have to output semicolons within the substitution, and the caller would have to avoid a trailing semicolon when using them) - which would make it harder for the implementation to seamlessly substitute clever macros for functions for performance, debugging and customisation purposes.
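A small illustrative example of that situation (the LOG macro is made up for the sake of the example):
```#include <stdio.h>

#ifdef NDEBUG
#define LOG(msg)            /* expands to nothing in release builds */
#else
#define LOG(msg) puts(msg)
#endif

int main(void)
{
    LOG("starting");  /* with NDEBUG defined this line becomes just ";" - an empty statement */
    return 0;
}
```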
Here is another answer: C allows null statements. They can be useful for things like empty loops:
```while (*d++ = *s++)
; // null statement.
```
You've just created a series of them.
It also allows not-quite-null statements like:
```0;
1+1;
```
Both of these contain expressions, but with no side-effects, so they don't really do anything. They're allowed, though a compiler might warn about them.
A decent compiler won't normally generate any code for any of the above (most won't even with optimization turned off, and I can't imagine one that would with optimization turned on).
Here is another answer: Semicolons are statement terminators, meaning they tell the compiler that one statement has ended before the next one begins.
One proof is that you can write your code in a single line, excluding the directives:
```main() { cout << "ENTER TWO NUMBERS"; cin >> a; cin >> b; cout << "The sum of two numbers are" << a+b; return 0;}
```
Which could mean
main() { cout << "ENTER TWO NUMBERS"[THEN] cin >> a[THEN] cin >> b[THEN] cout << "The sum of two numbers are" << a+b[THEN] return 0[THEN]}
so if you were to place multiple semicolons, it's like, a THEN, THEN, THEN, THEN, and your personal idea is correct indeed.
Here is another answer: Because a semicolon identifies the end of a statement in C, and in your case more semicolons identify more empty statements... There is nothing wrong; they are just empty statements.
Here is another answer: Two semicolons together make an empty statement. C doesn't mind having empty statements - they don't generate any code.
Comment for this answer: @JerryCoffin even in the for case they are empty statements, they are empty statements that evaluate to true ...
Comment for this answer: Or eventually they generate `nop` instruction
Comment for this answer: For the pedants: yes, you're undoubtedly correct: `for(;;)` contains two semicolons together, but not an empty statement.
Comment for this answer: @aleroot: The syntax for a `for` statement is: `for ( clause-1 ; expression-2 ; expression-3 ) statement`. As you can see, what's controlled is a statement, but the others are only expressions, not statements (well, clause-1 can be either an expression or a declaration, but it's still not a statement).
Comment for this answer: @JoSo: By the current standard, yes. They've changed the syntax specification a bit though, so now it's `for ( init-statement condition ; expression )`, and an `init-statement` is explicitly a statement. I'd have to dig up an old draft to be sure, but I *think* at that time, the first clause in a `for` loop was a unique thing that wasn't traceable back to an actual statement (either that or my memory is just bad--all too likely at my age).
Comment for this answer: @JerryCoffin: Technically, a declaration is a statement as well, no?
|
Title: Retrieve AutoIncrement key value when the column is NOT the first one in the table
Tags: subsonic;subsonic2.2
Question: I've got a question regarding how to retrieve the auto-increment or identity value for a column in SQL Server 2005, when said column is not the first declared column in a table.
I can get the generated value for a table just by issuing the following code:
```MyTable newRecord = new MyTable();
newRecord.SomeColumn = 2;
newRecord.Save();
return newRecord.MyIdColumn;
```
This works fine regardless of how many other columns make up the primary key of that particular table, but the first column declared MUST be the identity column, otherwise this doesn't work.
My problem is that I have to integrate my code with other tables that are out of my reach, and they have identity columns which are NOT the first columns in those tables, so I was wondering if there is a proper workaround to my problem, or if I'm stuck using something along the lines of SELECT @@IDENTITY to manually get that value?
Many thanks in advance for all your help!
Comment: Nope, no error. "Doesn't work" means that if I have a table with two int columns, col1 and col2, and only col2 is identity, then when I do a .Save() and then check for the value of col2, it returns 0 instead of the just-generated key value. If I move col2 as the first column instead of col1 and perform the same code as before, then the generated key value is correctly returned. I've tried this many times and it always gives me consistent results; I have to have my identity column as the first column, otherwise I can't access the key value of my recently inserted record.
Comment: SubSonic doesn't set the key by ordinal position - if you're using 2.2 then after save the SetPrimarykey method goes off and asks the schema which prop to set. So - the question is, what does "doesn't work" mean? Do you get an error?
Here is another answer: From the "ewww gross" department:
Here's my workaround for now, hopefully someone may propose a better solution to what I did.
```MyTable newRecord = new MyTable();
newRecord.SomeColumn = 2;
newRecord.Save();
CodingHorror horror = new CodingHorror();
string SQL = "SELECT IDENT_CURRENT(@tableName)";
int newId = horror.ExecuteScalar<int>(SQL, "MyTable");
newRecord.MyIdColumn = newId;
newRecord.MarkClean();
return newRecord.MyIdColumn;
```
|
Title: In OQAM modulation, why does the up-sampling process be seperated into two parts?
Tags: signal-analysis
Question: The OQAM modulation is implemented by up-sampling the symbols by two and delaying the imaginary part of each sub-channel by half a symbol. The signals are then up-sampled by M/2 and convolved with the impulse response of the pulse shaping filter u[n]. Why should this process be separated into two parts?
Comment: I'm really curious: What system uses OQAM? For QPSK, I see the ease of implementation that explains why you'd want to trade bandwidth for limited phase jumps, but for QAMs >4 (essentially, 9QAM and up, but I've never seen anything like 9QAM, so 16QAM up), wouldn't you just code the symbols to achieve that if it was of any concern?
Comment: I mean, I've seen Offset modulations in multicarrier schemes (specifically, things that are OFDM or generalizations of OFDM), but you say "pulse shaping filter"; is this an input to an OFDM-like system?
Comment: @MarcusMüller OQAM in combination with pulse-shaping can well be used for multicarrier systems. When using a pulse shaping filter, each carrier is shaped separately, and it can be considered a generalization of the conventional CP-OFDM (which essentially uses a pulse shaping filter which is a rect of length $T+T_{CP}$ and the subcarriers are $1/T$ apart (apparently, $T$ is the symbol duration). When using a different pulse shaping filter, you likely run into the restriction of the Balian-Low Theorem (which states there is no pulse shaping filter that is well localized ...
Comment: ... in time and frequency which can achieve orthogonality between carriers with QAM modulation (when subcarrier distance is $1/T$). Here, using OQAM can give you at least quasi-orthogonality (real-domain orthogonality only) between the carriers. Have a look for OFDM-OQAM and FBMC-OQAM for more information.
Comment: I agree with most of your comment. However, I have some question about the classification. I think there are two kinds of technology to avoid ISI: OFDM & FBMC. One way to realize FBMC is OQAM/OFDM.
Comment: Well, I think this is just a terminology thing. Some people use OFDM/OQAM and FBMC/OQAM interchangeably. CP-OFDM is a completely different technique (as used in LTE). To my understanding FBMC/OQAM does not require perfect reconstruction (i.e. exact real-domain orthogonality, but only "orthogonal enough"), whereas OFDM/OQAM requires perfect reconstruction (according to the name O-FDM). But this is just terminology, and it depends on what you use and how you define it.
Here is the accepted answer: In the end, what counts is that the transmit signal of a multicarrier OQAM system equals
$$x(t)=\sum_k\sum_m j^{k+m}d_{km} g(t-k\frac{T}{2})\exp(j2\pi\frac{mt}{T})$$
where $d_{km}$ is the real-valued data, $k$ is the symbol index and $m$ is the subcarrier index.
However you arrive at this transmit signal structure, your implementation is fine; it is mostly done as you describe for convenience. You could just as well perform the upsampling by $M$ points and delay one part of the QAM symbols by $M/2$ samples, instead of first upsampling by two and then delaying one part by one sample. The split form is simply preferred for implementation reasons: for example, delaying by $M/2$ samples requires more memory than delaying by only one sample.
So, the answer is that it's not an algorithmic requirement, but more a convenient way of generating the OQAM multicarrier signal.
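As a side calculation (added here purely for illustration), the equivalence of the two chains follows from a standard multirate identity: if the data are upsampled by two and the imaginary part is delayed by one sample, those delayed samples sit at the odd indices $2k+1$; after the further upsampling by $M/2$ they land at
$$n = (2k+1)\frac{M}{2} = kM + \frac{M}{2},$$
which is exactly where they end up if one instead upsamples $d[k]$ by $M$ and delays by $M/2$ samples.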
|
Title: Plotly python timedelta64 (10:43:23.123456) not showing up the same on scatter plot graph
Tags: plotly;scatter-plot;timedelta;plotly-python
Question: ```df_exec['exctn_tim']
0 10:40:01.169646
1 10:40:01.169709
Name: exctn_tim, dtype: timedelta64[ns]
```
I want to plot the graph using exctn_tim as the x-axis, but when I do this in a Plotly Scatter graph, it automatically changes to something like:
```38.40117T``` which is not a readable time format at all, and it messes up my whole graph.
Here is another answer: Unfortunately this seems to be an open issue in Plotly, and timedelta64 does not appear to be a supported format.
One possible workaround would be to convert this particular column of the dataframe into a format containing date and time: ```df_exec['exctn_tim'] = df_exec['exctn_tim'] + pd.to_datetime('1970/01/01')```
|
Title: -e option in sed
Tags: sed
Question: I saw someone do this to replace "upon" with "eat" in a text file called ```oro```.
```sed -i -e "s|upon|eat|" oro
```
Question: what is the purpose of the -e option in the above command? I tried with and without the -e option and observed no differences.
Comment: @yaegashi - if it is a duplicate, I don't think it should matter - there aren't any answers at that link which answer the question. There are a few which talk about ways to handle it - but not a one which discusses its purpose - especially in the context of `-i`. Ok, there was one - a good one - short and sweet - but not regarding `-i`,
Comment: @lcd047 - you're wrong about `-i` - and none of those have to be terminated by newlines either. `-e` does not allow you to terminate it with a newline - a newline does that.
Comment: @lcd047 - i know how it works - but it doesn't work with newlines, and they don't need terminating newlines, either. At least, not in this case. There are two ways to terminate those commands portably - one you've just demonstrated, and the other is a newline. Except that's not the only way to do it the way you have just done.
Comment: @lcd047 - maybe [look here](http://unix.stackexchange.com/q/148551/52934).
Comment: @lcd047 - But you didn't. You did as those answers do and talked about *where* it's needed. I know where it's needed - and i also know why. I don't believe you do - and those answers don't discuss it either. It's not a big deal - but you're not helping your case by confirming my complaint. And, yes, i know there was no `-i` in sight, but there is one the question here, and it is not unrelated.
Comment: @lcd047 - I didn't call you any names - I certainly don't discriminate against people based on their understanding of edgecase `sed` parsing rules. And anyway, just that you know *where* it's needed is far more than most, so if I did - and if ever it was in any danger of affecting my opinion of you - it would have *positively* done so. But it didn't - and never would have. Yes, I don't believe you do - I'm sorry if that offends you. But I am done talking with you about it. If you wish to have some answers from my perspective - you can follow the link. Or not. I don't care.
Comment: @lcd047 - Yeah, that is, in a roundabout way, why. The `-e` command is a script - as is `-f` - they can be intermingled. Anyway, GNU writes its sources according to a standard - which is far clearer on the matter. The terminator in both cases is a null read - the end of the script, EOF. It's not a newline, but that is a pedantic argument to make on my part anyway. Still, the answers linked don't come anywhere close to discussing that. By spec: *The application shall not present a script that violates the restrictions of a text file except that the final character need not be a newline.*
Comment: @mikeserv [This answer](http://unix.stackexchange.com/a/33160/111878) explains it pretty well: you can't have `sed '/foo/i\bar; /fred/a\barny; /harry/c\potter'`, since `i`, `a`, and `c` have to be terminated by newlines. Using `-e` allows you to pass `i`, `a`, and `c` on the command line. `-i` is completely irrelevant to this issue.
Comment: @mikeserv Please try this: `echo foo | sed -e '/foo/a\' -e 'bar'`
Comment: @mikeserv My point was to show you why `-e` is needed. `echo foo | sed '/foo/a\bar'` doesn't work, while `echo foo | sed -e '/foo/a\' -e bar` does (with no `-i` in sight, either). I wasn't trying to find all (or any) ways to make `a\ ` work, portable or not.
Comment: @mikeserv _I don't believe you do_ - Am I allowed to call you names in response, then? :) _And, yes, i know there was no `-i` in sight, but there is one the question here, and it is not unrelated._ - Then please show how it's related. So far you (1) have rejected my explanation without giving any reason, and (2) started a pissing contest. Surely you can do better than that?
Comment: @mikeserv Ok, so to answer your question _why_: `-e` compiles its arguments as a string (`compile_string()` in GNU `sed` sources), while `-f` and `'...'` compile their argument as a program (`compile_file()` in GNU `sed`). This means `-e` terminates commands, which is why `sed -e '/foo/a\' -e bar` works, and `sed '/foo/a\' -e bar` doesn't. In the original question `-e` doesn't make any difference because `s` doesn't care whether something follows or not. Have a good day, too.
|
Title: Why is my NULL count returning 0 in MySQL? Very basic
Tags: mysql
Question: I have 2 easy questions that are puzzling me (I am new to MySQL)
I have a table (survey) with 3 columns, UserID (primary), SkinColor, and HairColor. I wanted to count the number of people who have a hair color that is null (they didn't answer the survey).
I did this with:
```select count(id)
from survey
where haircolor is null
```
But I was puzzled why I couldn't do
```select count(haircolor)
from survey
where haircolor is null
```
It returned 0. Why is this?
Question 2: I want to return the total number of survey respondents (all ID's), number of people with null value for hair color, and null value for skin color. I tried this query
```select count(id), count(skincolor), count(haircolor)
from survey
where skincolor is null and haircolor is null
```
But that just returned the count where skincolor and haircolor are both null, obviously, and not an individual count for each column. Is there a way to put the WHERE constraint up in the SELECT section so I can specify different constraints for each select?
Thanks!
Comment: Q1: Use `select count(1) from ...`, it's faster and doesn't have the null problem. Q2: It seems to me that you have the germ of the answer to question 2 in your first question. How about `select count(1), count(skincolor), count(haircolor) from survey;`?
Here is another answer: for the second case:
```select 'No haircolor', count(id)
from survey
where haircolor is null
union all
select 'no skincolor',count(id)
from survey
where skincolor is null
```
this should return something like:
```No haircolor 3
No skincolor 1
```
If you want the result as a single line you'll need to do a pivot on the result above.
Note: I'm assuming string literals are quoted with single quotes as in MS SQL
Here is another answer: As for question 2:
It is easy enough, since COUNT() skips NULL values:
``` select count(id), count(skincolor), count(haircolor) from survey
```
This gives you the number of non-nulls for each column.
You should also be able to do something along the lines of:
```SELECT count(1) AS num,
count(1)-count(skincolor) AS nullcolor,
count(1)-count(haircolor) AS nullhair
FROM survey
```
Here is another answer: I did this with:
```
select count(id)
from survey
where haircolor is null
```
But I was puzzled why I couldn't do
```
select count(haircolor)
from survey
where haircolor is null
```
It worked the first time because you are counting the IDs of rows where haircolor is null, and the IDs themselves are not null. The second time you are counting values that are themselves NULL, and COUNT only counts non-NULL values, so it returns 0. For example, with three rows where haircolor IS NULL, COUNT(id) returns 3 but COUNT(haircolor) returns 0.
|
Title: Ubuntu 17.10 login screen wallpaper
Tags: appearance
Question: I like the wallpaper of Ubuntu 17.10 login screen and I would like to set it as my main wallpaper, too. Can you tell me where it is located in my files? Thanks in advance.
Comment: Isn't it just some tiled colour and texture?
Here is another answer: Using Nautilus navigate to ```/usr/share/backgrounds```:
You will find more images in the subdirectories. For example, the gnome subdirectory has many wallpapers and yours might be among them.
|
Title: How to display a special code (e.g., triangular bullet: \u2023) n times with C# Console.WriteLine()
Tags: c#;unicode;console
Question: How can I display special codes in C# ```Console.WriteLine()```?
Given: ```symbol = '\u2023';```
The escape sequence ```\u2023``` should display ```‣``` on the console in C#. So, how can I get the output for the symbol as ```'‣'``` and not ```'?'```?
What I get is just the wrong symbol ```'?'``` instead.
```public void DisplayPattern (int n, char symbol)
{
string pattern = "";
for (int i = 0; i < n; i++)
{
pattern = new String(symbol, i);
Console.WriteLine(pattern);
}
}
```
Comment: Console can only show ASCII characters.
Comment: it should be possible by using the encoding: `UnicodeEncoding unicode = new UnicodeEncoding();` but I don't know how to develop my function to get the output right!
Comment: Thanks for the suggestion. However, I used this method but the question marks turned in to squares not triangle!
Comment: Yes, It works. thank you very much, indeed!
Comment: Does this answer your question? [How to write Unicode characters to the console?](https://stackoverflow.com/questions/5750203/how-to-write-unicode-characters-to-the-console)
Comment: @AliSafari change the font. MS Gothic shows the triangle for me.
Here is the accepted answer: Set the console encoding
```Console.OutputEncoding = System.Text.Encoding.UTF8;
Console.Write('\u2023');
```
Now it depends on whether you use a font that supports that character. ```Consolas``` doesn't.
Comment for this answer: Do you know how can I change my console font?
Comment for this answer: Sorry @fubo, where I have to do the right click? If you mean on cmd.exe? then I don't see any font tab on this app.
Comment for this answer: @AliSafari the link to the duplicate question I posted shows you how to change the font
Comment for this answer: @AliSafari right, change the font of your console like Crowcoder mentioned. (right click on console bar, properties, font, Gothic)
Here is another answer: So with the help of you guys I'll write the answer down:
As @fubo said:
first add:
```Console.OutputEncoding = System.Text.Encoding.UTF8;
```
then choose an appropriate font, as @Crowcoder said: MS Gothic will turn all the question marks into triangle bullets.
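Putting the two pieces together, here is a minimal sketch (the count of five rows and the loop bounds are illustrative, not taken from the question):
```using System;
using System.Text;

// Switch the console to UTF-8 first, otherwise the glyph is replaced by '?'.
Console.OutputEncoding = Encoding.UTF8;

char symbol = '\u2023';
for (int i = 1; i <= 5; i++)
{
    // new string(char, count) repeats the bullet i times per row.
    Console.WriteLine(new string(symbol, i));
}
```
Whether the glyph actually renders still depends on the console font, as noted above.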
|
Title: How to avoid infinite loops in the .NET RegEx class?
Tags: .net;regex;infinite-loop
Question: I got a simple task: take an XPath expression and return a prefix that matches the parent of the node that might be selected.
Example:
```/aaa/bbb => /aaa
/aaa/bbb/ccc => /aaa/bbb
/aaa/bbb/ccc[@x='1' and @y="/aaa[name='z']"] => /aaa/bbb
```
Because the patterns inside the square brackets might contain brackets within quotes, I decided to try to achieve this with the use of regular expressions. Here's a code snippet:
```string input =
"/aaa/bbb/ccc[@x='1' and @y=\"/aaa[name='z'] \"]";
// ^-- remove space for no loop
string pattern = @"/[a-zA-Z0-9]+(\[([^]]*(]"")?)+])?$";
System.Text.RegularExpressions.Regex re =
new System.Text.RegularExpressions.Regex(pattern);
bool ismatch = re.IsMatch(input); // <== Infinite loop in here
// some code based on the match
```
Because the patterns are rather regular, I looked for '/' followed by an identifier followed by an optional group that matches at the end of the string (....)?$
The code seemed to work, but playing with different values for the input string, I found that by simply inserting a space (in the location shown in the comment), the .NET IsMatch function gets into an infinite loop, taking all the CPU it gets.
Now regardless of whether this regular expression pattern is the best one (I had more complex but simplified it to show the problem), this seems to show that using RegEx with anything not trivial may be very risky.
Am I missing something? Is there a way to guard against infinite loops in regular expression matches?
Comment: in general, isn't that equivalent to the halting problem?
Here is the accepted answer: Ok, let's break this down then:
```Input: /aaa/bbb/ccc[@x='1' and @y="/aaa[name='z'] "]
Pattern: /[a-zA-Z0-9]+(\[([^]]*(]")?)+])?$
```
(I assume you meant \" in your C#-escaped string, not ""... translation from VB.NET?)
First, /[a-zA-Z0-9]+ will gobble up through the first square bracket, leaving:
```Input: [@x='1' and @y="/aaa[name='z'] "]
```
The outer group of (\[([^]]*(]")?)+])?$ should match if there is 0 or 1 instance before the EOL. So let's break inside and see if it matches anything.
The "[" gets gobbled right away, leaving us with:
```Input: @x='1' and @y="/aaa[name='z'] "]
Pattern: ([^]]*(]")?)+]
```
Breaking down the pattern: match 0 or more non-] characters and then match "] 0 or 1 times, and keep doing this until you can't. Then try to find and gobble a ] afterward.
The pattern matches based on [^]]* until it reaches the ].
Since there's a space between ] and ", it can't gobble either of those characters, but the ? after (]") allows it to return true anyway.
Now we've successfully matched ([^]]*(]")?) once, but the + says we should attempt to keep matching it any number of times we can.
This leaves us with:
```Input: ] "]
```
The problem here is that this input can match ([^]]*(]")?) an infinite number of times without ever being gobbled up, and "+" will force it to just keep trying.
You're essentially matching "1 or more" instances of something that can itself match "0 or more" of one thing followed by "0 or 1" of something else. Since neither of the two subpatterns exists in the remaining input, it keeps matching 0 of [^]]* and 0 of (]")? in an endless loop.
The input never gets gobbled, and the rest of the pattern after the "+" never gets evaluated.
(Hopefully I got the SO-escape-of-regex-escape right above.)
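A minimal sketch of one way to defuse this class of pattern (illustrative only, not a drop-in replacement for the original XPath-prefix regex): force every iteration of the repeated group to consume at least one character, so there is no zero-length iteration left to retry.
```string input = "/aaa/bbb/ccc[@x='1' and @y=\"/aaa[name='z'] \"]";

// Hypothetical rework of the bracket group: each pass now matches either a
// single non-']' character or the two-character sequence ']"', so no
// iteration can succeed while consuming nothing.
string pattern = @"/[a-zA-Z0-9]+(\[(?:[^\]]|\]"")*\])?$";

bool ismatch = System.Text.RegularExpressions.Regex.IsMatch(input, pattern);
// Returns false quickly on the "space" input above instead of spinning,
// and still matches once the space before the closing quote is removed.
```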
Comment for this answer: The problem here is that this input can match ([^]]*(]")?) an infinite number of times without ever being gobbled up, and "+" will force it to just keep trying. WHY ????
Comment for this answer: Encountered this myself with a complex pattern. Sure makes me want to never use Regex again. You have something that can have a landmine in it, and not because of a bug. With a sufficiently complex pattern, you're never quite sure if that landmine will strike.
Comment for this answer: I think the differences you saw were due to the different dialects of regex, not fancy infinite-loop-detection in other engines.
The core problem is wrapping something that can match *empty text* an infinite number of times. Anything that's a variation of (x?)+ or (x?)* could be dangerous given the right input. Refactoring your pattern should allow you to get what you need without creating a potential for an infinite loop.
Regardless of the language, the lesson is to always program defensively against arbitrary user input.
Comment for this answer: Well that was productive (to me) - thanks Richard.
I conclude that:
1. Getting a regex pattern from an external source is dangerous and can easily hose an application
2. That the regex in .NET does not detect infinite loops and is also not providing a way to limit processing
3. That different regex engines can give different results, so even if the syntax is the same, some semantics may be different (portability note)
Thanks.
Here is another answer: To answer the original question (i.e. how to avoid an infinite loop with regex), this has become easy with .NET 4.5, as you can simply pass a timeout to the Regex methods. There is an internal timer that will stop the regex loop when the timeout expires and raise a RegexMatchTimeoutException.
For example, you would do the following
```string input = "/aaa/bbb/ccc[@x='1' and @y=\"/aaa[name='z'] \"]";
string pattern = @"/[a-zA-Z0-9]+(\[([^]]*(]"")?)+])?$";
bool ismatch = Regex.IsMatch(input, pattern, RegexOptions.None, TimeSpan.FromSeconds(5));
```
You can check out MSDN for more details
Comment for this answer: Thanks! - that's certainly useful and relevant info but still the original intent was to understand how to avoid this case to begin with) - it's nice to know that after 5 seconds it will throw an exception but it is much better to write it correctly to begin with...
Here is another answer: It shows that using code with anything not trivial can be risky. You created code that can result in an infinite loop, and the RegEx compiler obliged. Nothing new that hasn't been done since the first 20 IF X=0 THEN GOTO 10.
If you're worried about this in a particular edge case, you could spawn a thread for RegEx and then kill it after some reasonable execution time.
Comment for this answer: I find this answer contra productive. Other RegEx engines I tried didn't get into an infinite loop (try, for example, the online JavaScript RegEx tester at http://www.regular-expressions.info/javascriptexample.html and you'll see it works just fine).
This regular expression is simple enough and I do not find it trivial that its _expected_ failure mode (when no match is found) is an infinite loop.
The thread idea is not useful either. Should I use this idea anywhere an external RegEx is provided? I don't think so. I think this is probably a bug in the RegEx (or else a big gaping hole).
Here is another answer: ```
The problem here is that this input can match ([^]]*(]")?) an infinite of times without ever being gobbled up, and "+" will force it to just keep trying.
```
That's one hell of a bug in .NET's RegEx implementation. Regular expressions just don't work like that. When you turn them into automata, you automatically get the fact that an infinite repetition of an empty string is still an empty string.
In other words, any non-buggy regex engine will execute this infinite loop instantly and continue with the rest of the regex.
If you prefer, regular expressions are such a limited language that it's possible (and easy) to detect and avoid such infinite loops.
|
Title: Netbeans 14 won't open on mac
Tags: java;netbeans
Question: I just purchased a new computer (a Mac) running macOS Monterey.
Neither my old NetBeans 11.3, transferred from my old computer, nor the newest version, NetBeans 14, which I just installed, seems to work. I reinstalled Java and JDK 17 and restarted, and still have the same issues.
NetBeans 14 won't open at all; it just starts opening and quits after a few seconds. NetBeans 11 is stuck initializing any project forever. NetBeans 14 opened briefly until I allowed it to load some modules from NetBeans 11 into it. After restarting, it no longer even opens!
This is so frustrating! How can I access all of my projects? I am just not expert enough to figure this out, and no one else seems to have problems with NetBeans 14.
Comment: [1] Problems associated with starting NetBeans are written to the NetBeans log file. That log is named **messages.log**, and it is a simple text file. It will reside in a directory named **var** under your _user_ directory. In my case, on Windows 10, its path is _C:\Users\johndoe\AppData\Roaming\NetBeans\10.0\var\log_, though obviously it will be different for you. Update your question (only) with any messages appended to the log when you attempt to start NetBeans, being sure to specify the version of Java being used....
Comment: [2] You can explicitly specify the version of Java you want NetBeans to use in the file **netBeans.conf**. See [this answer](https://stackoverflow.com/a/54770359/2985643) for details on how to do that. I'm not a Mac user, but my understanding is that Apple sometimes bundles Java with the O/S, so there may be a conflict between the Apple installed Java, and the version you want to use. Explicitly specifying the JDK path of choice in **netBeans.conf** will resolve that potential issue. [Here's an (old) example](https://stackoverflow.com/a/40869130/2985643) of editing **netbeans.conf** on a Mac.
Comment: I cannot find said messages.log. I have a folder named var but no file called messages.log, and nothing pops up under Finder either
Comment: I thought from reading around here that it was probably an issue with NetBeans not finding Java, which is why I redownloaded it and tried to specify its location via similar examples. It doesn't do anything. Maybe because I am new to bash I am doing something wrong. My version of the JDK is jdk-18.02.jdk so I just changed it to that version. I saw that these files from the path actually exist on my computer so it should be working.
Comment: This is what I tried based on the post you suggested (I tried some other variations as well, also based on some other posts): `$ vi /Applications/NetBeans/NetBeans\ 14.app/Contents/Resources/NetBeans/etc/netbeans.conf
[...]
#netbeans_jdkhome="/Applications/NetBeans/NetBeans 14.app/Contents/Resources/NetBeans/bin/jre"
netbeans_jdkhome="/Library/Java/JavaVirtualMachines/jdk18.0.2.jdk /Contents/Home/`
Comment: I also tried completely uninstalling and installing netbeans and clearing the cache. No effect. I don't know if any others have this much issue with netbeans. I am thinking it is just too glitchy and might need to try out another IDE.
Comment: It worked using OmniRemover https://www.minicreo.com/mac-uninstaller/uninstall-netbeans-mac.html and reinstalling. Not sure why the first install didn't work well. I did exactly the same thing. These programs are as finicky as my cells.
|
Title: Can I create int.FromBytes(byte[] bytes) extension method in C#?
Tags: c#;extension-methods;c#-7.3
Question: Can I create int.FromBytes(byte[] bytes) extension method in C#?
I need something with usage like this:
```int a = int.FromBytes(new byte[]{1,2,3,4});
```
I'm using C# 7.3.
Actually I need this for the ushort data type. For now I'm using an extension method like this:
```public static ushort FromBytes(sbyte msb, byte lsb)
{
ushort usmsb = (byte)msb;
ushort uslsb = lsb;
return (ushort)((usmsb << 8) + uslsb);
}
```
I'm using it like this:
```ushort x = Helpers.FromBytes(1, 2);
```
I can't answer my closed question, so I'll post it here. This is how I did it and what I needed:
```// two byte tuple extension
public static ushort ToUShort(this (byte msb, byte lsb) bytes)
{
ushort usmsb = bytes.msb;
ushort uslsb = bytes.lsb;
return (ushort)((usmsb << 8) + uslsb);
}
```
Usage:
```byte byte1 = 32;
byte byte2 = 42;
ushort result = (byte1, byte2).ToUShort();
```
This is much better than an extension for byte[] because you can't pass the wrong number of bytes.
Comment: @ScottHunter Nothing happened. I don't know the syntax to create extension like this.
Comment: @Han RIght. I have extension method to convert it back. I messed up my question.
Comment: @RenéVogt I know about BitConverter, byt my question is about language.
Comment: @Flydog57 This is exactly what I want, I have this for conversion back, but I didn't thought about this this obvious way :) Can you post this as an answer?
Comment: Your FromBytes() is not an extension method. It's just a static method.
Comment: You should really read the duplicate question before stating that your question isn't a duplicate, given that it's asking for exactly what you claim to be asking, and gives you exactly the answer you claim you want.
Comment: That FromBytes method is (as shown) a static method on int. You can't currently write static extension methods. You could write an extension method on byte arrays (or even `IEnumerable`. Something like `new byte[] {1, 2, 3, 4}.ToInt()`)
Comment: You don't need to, it's already [there](https://docs.microsoft.com/de-de/dotnet/api/system.bitconverter.toint32?view=netframework-4.8)
Comment: What happened when you tried?
Here is another answer: Yes, you can map to the existing static method with your own static extension method, if you think it is worth the trouble.
```static public class ExtensionMethods
{
static public int ToInt32(this byte[] source)
{
return BitConverter.ToInt32(source, 0);
}
}
```
|
Title: Why is the Southern Hemisphere of Mars heavily struck by asteroids while the northern part is relatively smooth?
Tags: mars;planetary-science;asteroid;perseverance
Question: The northern hemisphere of Mars is smooth with flat lands, but areas below the Tharsis rise in particular are heavily cratered; also, the Hellas basin, one of the largest impact craters, is located in the Southern Hemisphere. My question is WHY only the Southern Hemisphere?
Comment: [This question on the astronomy SE might be of help](https://astronomy.stackexchange.com/q/26349)
Comment: Instinctively thought because of different ages, craters possibly hidden under lava just like on the moon. But when starting to search I realized it may be more complicated, possibly involving a huge impact. Sounds familiar :-) ? Am looking forward to the answer !
Comment: Don’t have time for a referenced answer, but I believe the popular theory is that a single huge impact basically left the Northern hemisphere as one huge crater, wiping out all cratering from earlier smaller impacts.
Comment: It seems the northern basin (Borealis basin) hypothetically may be a combination of a huge impact and subsequent massive volcanism that may have covered post-impact craters ...
Comment: This is just an idea. If Mars once had an ocean in its northern hemisphere, as is speculated, any meteorites that would have hit the northern hemisphere during this period may not have survived hitting the surface of the ocean. In hitting the ocean much of the kinetic energy any meteorites had would have been lost during impact & thus would not have formed impact craters on the ocean floor.
|
Title: Blocking in classloading while quering hazelcast
Tags: classloader;hazelcast
Question: We are using Hazelcast as a distributed cache. After the application runs for a certain time we start to see blocking in class loading. The following is the stack trace:
```java.lang.Thread.State: BLOCKED (on object monitor)
at java.lang.ClassLoader.loadClass(ClassLoader.java:404)
- locked <0x00002acaac4c4718> (a java.lang.Object)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at com.hazelcast.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:124)
at com.hazelcast.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:97)
at com.hazelcast.nio.IOUtil$1.resolveClass(IOUtil.java:113)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at com.hazelcast.nio.serialization.DefaultSerializers$ObjectSerializer.read(DefaultSerializers.java:196)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.toObject(StreamSerializerAdapter.java:65)
at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:260)
at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:186)
at com.hazelcast.map.impl.AbstractMapServiceContextSupport.toObject(AbstractMapServiceContextSupport.java:42)
at com.hazelcast.map.impl.DefaultMapServiceContext.toObject(DefaultMapServiceContext.java:28)
at com.hazelcast.map.impl.proxy.MapProxySupport.toObject(MapProxySupport.java:1038)
at com.hazelcast.map.impl.proxy.MapProxyImpl.get(MapProxyImpl.java:84)
```
Hazelcast is loading the class every time it deserializes an object. I am not sure why class loading is required each time.
Can somebody please help.
Comment: how did you address this issue ?
Here is another answer: This is not Hazelcast specific; whenever you create an instance you have to ask the classloader for the class, no matter whether you use reflection or a new call. The problem really starts when synchronized classloaders come into play (as in webapps and the like). Hazelcast, obviously, has to deserialize a lot and therefore requests a lot of classes.
The internal deserialization is somewhat optimized by now (by caching the constructor instances, as far as I remember), but Java standard serialization (the one you use) always asks for the class, and classes aren't yet cached.
Comment for this answer: There is no way around it, as I mentioned, this is a Java issue and not specific to Hazelcast. You can only hack / wrap the ClassLoader and implement a caching strategy on your own. Afterwards you pass the wrapping ClassLoader instance to Hazelcast's config as the config classloader.
Comment for this answer: The problem with that approach is (and the reason it wasn't merged), it won't work for certain containers. Especially containers like OSGi and application servers are a burden to keep working. The constructor cache was kind of the middle way to go but it is not a perfect solution, I agree. The problem is, the whole classloading thing needs to be rewritten to make it more adjustable and less complex - providing support for further frameworks always break another one.
Comment for this answer: Maybe it changed by now but trying it in the past I had lots of weird memleaks
Comment for this answer: so what do you suggest to get out of this problem? I've started facing the same issue and as you mentioned, there is a synchronized block in classloader.
Comment for this answer: Ok. Instead of wrapping the class loader, I kind of replicated the `ConstructorCache` class in `ClassLoaderUtil` in hz to create `ClassCacheMap`, which will cache the classes. Later I saw that almost identical changes were done by 'Mr.Easy' too against some PR, but never got integrated with hz codeline (I presume). I hope that it might give me some good results.
I'm about to run some tests with the patched jar now.
Comment for this answer: please let me know if you see any problems with approach.
Comment for this answer: Hi noctarius, I understood the second part where you said that whole class loading code in runtime needs to be re-written. However, in the first part, what is the problem that you faced with OSGI container. I'm running the modified code in an OSGI container and my preliminary tests shows that it is working fine. So could you please elaborate on the problem that you faced so that I can get a head start there ?
Comment for this answer: Thanks @noctarius, I'll keep an eye on memory leaks in the system, going forward.
|
Title: No response in SQSMessageSuccess while detecting faces inside a video uploaded on Amazon s3
Tags: java;amazon-web-services;amazon-s3;amazon-sqs
Question: I have been trying to detect faces in a video stored on Amazon S3; the faces have to be matched against the collection that contains the faces to be searched for in the video.
I have used Amazon VideoDetect.
My piece of code, goes like this:
```CreateCollection createCollection = new CreateCollection(collection);
createCollection.makeCollection();
AddFacesToCollection addFacesToCollection = new AddFacesToCollection(collection, bucketName, image);
addFacesToCollection.addFaces();
VideoDetect videoDetect = new VideoDetect(video, bucketName, collection);
videoDetect.CreateTopicandQueue();
try {
videoDetect.StartFaceSearchCollection(bucketName, video, collection);
if (videoDetect.GetSQSMessageSuccess())
videoDetect.GetFaceSearchCollectionResults();
} catch (Exception e) {
e.printStackTrace();
return false;
}
videoDetect.DeleteTopicandQueue();
return true;
```
Things seem to work fine up to StartFaceSearchCollection: I am getting a jobId and a queue is created as well. But when it tries to go around and call GetSQSMessageSuccess, it never returns any message.
The code which is trying to fetch the message is :
``` ReceiveMessageRequest.Builder receiveMessageRequest = ReceiveMessageRequest.builder().queueUrl(sqsQueueUrl);
messages = sqs.receiveMessage(receiveMessageRequest.build()).messages();
```
It has the correct sqsQueueUrl, which exists, but I am not getting anything in the messages.
On timeout it gives me this exception:
```software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: sqs.region.amazonaws.com
at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:97)
Caused by: java.net.UnknownHostException: sqs.region.amazonaws.com
```
So is there any alternative to this? Instead of an SQS message, can we track/poll the jobId any other way? Or am I missing anything?
Here is another answer: A simple working code snippet to receive an SQS message with a valid sqsQueueUrl (see the AWS SQS documentation for more):
```ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(sqsQueueUrl);
final List<Message> messages = sqs.receiveMessage(receiveMessageRequest).getMessages();
for (final Message message : messages) {
System.out.println("Message");
System.out.println(" MessageId: " + message.getMessageId());
System.out.println(" ReceiptHandle: " + message.getReceiptHandle());
System.out.println(" MD5OfBody: " + message.getMD5OfBody());
System.out.println(" Body: " + message.getBody());
for (final Entry<String, String> entry : message.getAttributes().entrySet()) {
System.out.println("Attribute");
System.out.println(" Name: " + entry.getKey());
System.out.println(" Value: " + entry.getValue());
}
}
System.out.println();
```
Comment for this answer: Hi,the first line of code ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(sqsQueueUrl); is not returning me any value,and hence the messages are also not available although I have the queue as well as the jobId and the sqsQueueUrl specified is correct.
Comment for this answer: Its showing Messages Available (Visible) as 0. What could be the reason of these messages being 0 ?
Comment for this answer: On timeout it's giving me this exception: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: sqs..amazonaws.com
at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:97)
Caused by: java.net.UnknownHostException: sqs..amazonaws.com
Comment for this answer: @Preksha, have you checked the Amazon Management Console? In the SQS section -> select your topic -> Details -> Messages Available (Visible) and check whether the messages are present or not. Because if the sqsQueueUrl is correct and the region values are correct, then definitely there is no chance of any hiccups in the pipeline communication
Comment for this answer: You can configure the Amazon SQS message retention period to a value from 1 minute to 14 days. The default is 4 days. Once the message retention limit is reached, your messages are automatically deleted. That's the reason for the "0" messages. Are you sure you are using https://sqs.{region}.amazonaws.com/{account-id}/{QueueName}? Because the error message clearly elucidates that the URL is unreachable. You can verify the URL by visiting SQS Section -> select your topic -> Details -> URL (your sqsQueueUrl)
|
Title: Core Data Confusion
Tags: objective-c;macos;core-data
Question: So I'm new to Core Data, and everything I read says to use it - if you use SQLite you're an evil bad person. But I'm lost on some simple things. I have a bunch of data that will be used to set up an ```NSCollectionView```; this would be relatively simple in SQLite, but I don't want to be an evil bad person. Is there a simple tutorial someplace that I'm missing? I would love to see an example SQL-database-based app and the same thing with Core Data.
Something like here is a table structure in SQL, here is the equivalent in core data...
Here is a INSERT script in SQL, here is the equivalent in core data...
Here is a SELECT with a JOIN and a few WHERE statements, here is the equivalent in core data...
It's even the little things that I don't understand.
How do I provide a pre-populated Core Data system?
Where do the Core Data files live? In the bundle like my SQLite database would?
With an update to the app, what do I have to do to update the Core Data files if they live outside my bundle?
Comment: There's absolutely nothing wrong with using SQLite directly, if it suits your purposes. Here's a real world example: http://inessential.com/2010/02/26/on_switching_away_from_core_data Personally, I found Core Data easier to understand when I stopped thinking about databases, and started thinking about objects I needed to persist in my application. I know that sounds unhelpful, but I don't even think about what the underlying DB might look like when I use Core Data (well, maybe a bit).
Comment: Regarding the "evil bad person" bit, this means that if you want to use SQLite directly, don't try to use CoreData as a frontend to SQLite. The fact that SQLite is used internally in CoreData is an implementation detail, and trying to interact with it directly can produce unintended consequences. (I used to work on a large Cocoa app that used CoreData for storage.)
Comment: You can pre populate the data programatically. This also allows you to switch the underlying data store without it affecting your application. Here is a good tutorial: http://www.raywenderlich.com/934/core-data-tutorial-getting-started
Comment: @bneely - I'm reading that I shouldn't use SQLite at all, even without core data, I should only use core data in new applications. On the other hand I'm reading I should use a SQL script to pre-populate the backend core data database as you can run it in a script build phase to pre-populate the core data backend. So I'm all kinds of confused. I would rather no use SQL at all, but I'm lost on how to do even the littles things in core data.
Here is another answer: Justin808,
No one is an "evil bad person" for not using CD. If you prefer to use SQLite, go for it. It is used by many applications. It is a framework. If SQLite is a technology you are used to, then use SQLite. That said, CD is the Apple encouraged path for building rich, persistent model apps on their platforms. They don't provide many tools for the pure SQL community but provide a very rich set of tools for CD apps. I've attempted to answer the technology question here: Core Data VS Sqlite or FMDB…?
About your request for a line by line comparison between the same app implemented both ways, this sounds like an excellent learning opportunity for you to write one. (I teach beginning iOS programming. The app you're asking for can be quite simple. You can probably write both versions in one weekend. I would be happy to review your work and critique your blog post describing the differences. You could make an excellent contribution to other folks in your situation trying to choose between these two technologies.)
Your questions:
```
Something like here is a table structure in SQL, here is the
equivalent in core data...
```
Schemas are described differently but are substantially similar. That said, an SQL schema may not be tuned for use by CD and/or UI application or vice versa.
```
Here is a INSERT script in SQL, here is the equivalent in core data...
```
There are plenty of examples by Apple and others that tell you how to insert new entities. What is it you don't understand?
```
Here is a SELECT with a JOIN and a few WHERE statements, here is the
equivalent in core data...
```
The predicate language in CD is different than SQL. As such, you will query things differently. In particular, CD is an almost "pure" set theoretic approach to organizing data. You use fetches to seed your "query" and set operation to refine it. Beyond that, I need to direct you to one of the many books about CD and its predicate language.
```
How do I provide a pre-populated core data system
```
CD depends upon a file like every other DB system. You provide it in your bundle and copy it into your documents directory (on iOS) when you need to mutate it.
```
Where do the core data files live? in the bundle like my SQLite
database would?
```
Yes, they do. If you are using CD with a SQLite backing store, then it is just a SQLite DB file. (There is a special issue if you allow CD to store large BLOBs in the file system.)
```
With an update to the app what do I have to do to update the core data
files if they live outside my bundle?
```
I'm not sure what you are asking here? If you update your schema between versions, just as with SQLite, you will need to migrate your database to the new schema. CD provides some tools that work very well for additive migrations.
Good luck with your choice.
Andrew
|
Title: Find children objects missing parents objects using LINQ
Tags: c#;.net;linq
Question: I have a set of collections built in such a way that every object in the "children" (possible_child) collection should have at least one parent in the parent collection (possible_parent). I would like to detect those children with no parent object.
As an example, if I find a child with a given country, year, season and season type, there should be at least one parent record with the same data.
I wrote the following query, which I now see is wrong.
```var childrenMissingParents = (from cl in possible_child
join pr in possible_parent on new
{
p1=cl.Country,
p2=cl.Year,
p3=cl.Season,
p4=cl.SeasonType
}
equals
new
{
p1=pr.Country,
p2=pr.Year,
p3=pr.Season,
p4=pr.SeasonType
}
into opr from spr in opr.DefaultIfEmpty()
where spr == null select cr).ToList();
```
Can someone suggest a better idea?
Here is the accepted answer: ```var childrenMissingParents = possible_child.Where(
c => !possible_parent.Any(
p => p.Country == c.Country
&& p.Year == c.Year
&& p.Season == c.Season
&& p.SeasonType == c.SeasonType));
```
Comment for this answer: Both answers solve my problem thousand thanks to both of you.
Here is another answer: If I understood correctly, the following does what you want:
```var orphanedItems = possible_child
.Where(item => !possible_parent.Any(p =>
(p.Year == item.Year) &&
(p.Country == item.Country) &&
(p.Season== item.Season) &&
(p.SeasonType == item.SeasonType)));
```
Comment for this answer: correct and i just chose one of the working solutions. I could have chosen this one as well. Many thnaks
Here is another answer: You could use Where and Any to achieve your goal.
```var childrenWithoutParent = possible_child.Where(child => !possible_parent.Any(p =>
(p.Year == child.Year) &&
(p.Country == child.Country) &&
(p.Season == child.Season) &&
(p.SeasonType == child.SeasonType)));
```
However, you can improve the readability of your code even more: you could have a method in your child class to compare it to the parent:
```public class Child
{
....
public bool IsEqualsTo(Parent parent)
{
return (this.Year == parent.Year) &&
(this.Country == parent.Country) &&
(this.Season == parent.Season) &&
(this.SeasonType == parent.SeasonType);
}
}
```
This could improve the readability of your query.
``` var childrenWithoutParent = possible_child
.Where(child => !possible_parent.Any(p => child.IsEqualsTo(p)));
```
|
Title: can you load external executable javascript from a firefox extension?
Tags: firefox;firefox-addon;firefox-addon-sdk
Question: Does anyone know if there is a way to load external executable JavaScript from a Firefox add-on extension? I looked into scriptloader.loadSubScript, but it appears that it can only load from a local resource.
Any help would be appreciated.
Here is the accepted answer: You can always XHR for a file, save the contents to disk, then use scriptloader.loadSubScript with an add-on.
This would violate the AMO policies though, so you wouldn't be able to upload the add-on to http://addons.mozilla.org
Comment for this answer: @Erich: No, there is no alternative. Your desire to load code that is not in the add-on release is inherently at odds with the security-related AMO policies. There is quite a bit in place to specifically prevent accidentally doing what you desire because running unknown code (sourced outside the add-on bundle) with extension-level privileges is inherently a security issue.
Comment for this answer: Is there an alternative that would not violate the AMO policies?
Here is another answer: As @erikvold already pointed out, doing so would be a security hazard AND it also violates AMO rules (because it is a security hazard).
Consider your server gets compromised, or there is a way to MITM the connection retrieving the remote script (TLS bugs anyone :p), or you sell your domain and the new owner decides to ship a script to collect credit card information straight from a user's hard disk...
However, it is possible to run a remote script in an unprivileged environment, much like it would run in a website.
Create a ```Sandbox```. The Sandbox should be unprivileged, e.g. pass an URL in your domain into the constructor.
Retrieve your script, e.g. with XHR.
Evaluate your script in the Sandbox and pull out any data it might have generated for you.
This is essentially what tools like Greasemonkey (executing user scripts) do.
Creating and working with Sandboxes in a secure fashion is hard, and the Sandbox being unprivileged prohibits a lot of use cases, but maybe it will work for your stuff.
Comment for this answer: @Erich, Thanking somebody for answers on stackoverflow usually involves upvoting the answer (you can upvote more than one).
Comment for this answer: Thank you for the detailed answer.
Here is another answer: Try using Components.utils.import.
Example:
```const {Cc,Ci,Cu} = require("chrome");
Cu.import("url/path of the file");
```
Note:
A js file which uses DOM objects like window, navigator, etc. will return an error saying "window/navigator is undefined". This is simply because the main.js code does not have access to the DOM.
Refer to this thread for more information.
Comment for this answer: This does not work for external Javascript, which is what the question asked.
Comment for this answer: I've tried this but I can't figure out how to get an external URL to work. All the documents I read about it show either a chrome:// or resouce:// url that is referring to fles that already exist in the project. Is it possible to grab a javascript file from a web site when the extension initializes?
|
Title: Simple C++ web server with php support
Tags: php;c++;http
Question: I'm working on a simple C++ HTTP server as a school project and I would like to add PHP support to it. POST and GET methods should not be a problem, but I'm stuck on cookies. I googled for a long time and couldn't find out how PHP handles cookies, where it hands the output to an HTTP server such as Apache, or how this works in general.
Any ideas how I could print this code:
```<?php
setcookie("cookie[three]","cookiethree");
?>
```
to the console so it can be read by my server and, after some parsing(?), sent to a client?
Thanks guys
EDIT:
This is really close example to what I need, but when I execute the script it shows empty array..
http://php.net/manual/en/function.headers-list.php
php version:
PHP 5.3.6-13ubuntu3.2 with Suhosin-Patch (cli) (built: Oct 13 2011 23:09:42)
Copyright (c) 1997-2011 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies
Comment: @Justinᚅᚔᚈᚄᚒᚔ oops, thanks for the warning
Comment: Check this tutorial [phpcookies](http://www.tizag.com/phpT/phpcookies.php)
Comment: Dafarov: I guess you did not understand my post. Basically I want to be able to get the cookie header code just by running the PHP script in a console, so it can be read by my HTTP server and sent to the user
Here is the accepted answer: PHP gets its superglobal variables (such as ```Cookies```) from the HTTP server itself. When you parse a client request, you must store every key/value pair in an appropriate container (an ```HTTPRequest``` class, perhaps).
When interfacing your server with PHP you should write a module like apache does (```mod_php```). To do this, you will have to write your own API for interfacing with the modules. This means for every module you'll have (php, python ...) you will have the same interface for your Inputs/Outputs.
When writing such an API, you should define an easy way to pass all the superglobal variables PHP needs from the server. I've written my own HTTP server for the same purpose, and the documentation of PHP is a little tricky about this point, but you can take inspiration from ```PHP-CGI```: there is a ```php.exe``` or simply ```php``` command on Windows/Linux which can take arguments such as variables, if my memory serves. Anyway, there are several ways to pass these arguments to PHP, and I used ```CGI``` for my server.
Hope that'll help you.
Comment for this answer: You should first take a look at how Apache interfaces with PHP (through SAPI, Server Application Programming Interface for example). Also look at FastCGI, what it does and how it is used to communicate between PHP and apache and how you could use it in your web server.
Comment for this answer: Look at http://fuzzytolerance.info/cgi-vs-sapi-vs-fastcgi/ for an explanation about the two and a listing of the differences between them.
Comment for this answer: thanks, probably the most helpful answer so far :) I tried to override setcookie with an echo inside, but that's not possible :/ I will look for more info about the CGI runtime parameters, but if you could remember how it was done (what params you used) please do so, I would be really thankful :) thanks
Here is another answer: The way cookies work is that the server sends a ```Set-Cookie``` header:
```HTTP/1.0 200 OK
Set-Cookie: myCookieName=myCookieKey
Set-Cookie: anotherCookie=anotherValue
// other headers and probably content
```
Then, a compliant HTTP client will send it back in subsequent requests:
```GET /some/path HTTP/1.0
Cookie: myCookieName=myCookieKey; anotherCookie=anotherValue
```
It's way more complicated than that, but that's the basics.
To summarize, you need to:
Send a ```Set-Cookie``` header when your code requests a cookie to be set.
Look for a ```Cookie``` header when you're reading incoming requests.
Comment for this answer: I know I have to set a cookie header, I already prepared my code for that.. but how do I know what cookie to set when I run "php script.php" in a terminal and the code contains something like setcookie("cookie1","val1"); It does not print the cookie to the console, so how am I supposed to get it?
|
Title: How to make a boot profile permanent?
Tags: linux;boot;livecd
Question: I have a Finnix live CD. I can customize it by remastering it. When I boot with the live CD I need to make a little change to the boot profile.
The boot profile before making the changes is
```
linux apm=power-off vga=791 initrd=minirt quiet
```
The boot profile after making the change become
```
linux apm=power-off vga=791 initrd=minirt quiet root=/dev/sr0
```
Now, I need to make this change (adding root=/dev/sr0) permanent. How can I do that?
Here is the accepted answer: Find isolinux.cfg in your CD file system (mounted at /mnt/hda1 in your remastering guide).
In this file there is a line like
```APPEND apm=power-off vga=791 initrd=minirt quiet
```
edit it like you want before launching stage1 and stage2 scripts.
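Concretely, mirroring the change described in the question, the edited line would read:
```APPEND apm=power-off vga=791 initrd=minirt quiet root=/dev/sr0
```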
|
Title: Problems trying to use Monodevelop on Manjaro
Tags: c#;ide;monodevelop;archlinux;manjaro
Question: I can't get ```monodevelop``` to work on ```Manjaro```. I tried all the different options to install in ```AUR``` (```monodevelop-beta```, ```monodevelop-bin```, ```monodevelop-git```, ```monodevelop-nuget3```, ```monodevelop-stable```) and most of them failed to build.
The only one that didn't fail was ```monodevelop-bin```, but I couldn't get it to build my program. At first I got this error:
```The imported project "/usr/lib/mono/msbuild/15.0/bin/Roslyn/Microsoft.CSharp.Core.targets" was not found.
Confirm that the path in the <Import> declaration is correct, and that the file exists on disk. (MSB4019)
```
There was no folder "Roslyn" in the specified location. I made a folder with that name and copied the file ```Microsoft.CSharp.targets``` from ```msbuild/15.0/bin/``` into the "Roslyn" folder (renaming the copy to ```Microsoft.CSharp.Core.targets```).
It probably wasn't a good idea, but I tried it, just in case.
That did seem to work, but then I got another error:
```Parameter "AssemblyFiles" has invalid value "/usr/lib/mono/4.7-api/mscordlib.dll".
Could not load file or assebly 'System.Reflection.Metadata, Version=2233765295, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a or' or one of its dependencies. (MSB3248)
```
Here is the accepted answer: ```
doctorzeus commented on 2019-12-14 14:34
Since the issues with compatibility with the latest version of mono and msbuild as well as
there no longer being a "stable" build mode on the github project I am disowning this project.
Sadly I stopped using this IDE in favor of VSCode a while ago and also no longer have the time to maintain it with the large number of incompatibilities with the various dependencies.
Hopefully someone with more time will take over..
```
This is from the original maintainer/builder (at the time of writing) in the comments of the AUR page. After him, it seems that another person tried to continue maintaining it, but he also failed:
```
coder2000 commented on 2020-03-31 18:51
The build process is currently broken and the documentation is incomplete.
```
So, for now, you can't use ```monodevelop``` on ```Arch``` based systems (at least from ```AUR```). You will have to use alternative IDEs, like JetBrains Rider, or text editors (with proper extensions), like VSCode or Atom.
Also, from personal experience, I would advise against using ```Arch``` based distributions for programming, because of problems like this. Use ```Ubuntu``` or something ```Ubuntu``` based for the easiest experience with programming tools (I use ```Linux Mint``` for programming and I haven't had any major problems).
|
Title: Do not checkout certain file types in TortoiseSVN
Tags: svn;tortoisesvn;svnignore
Question: I've read a lot of instructions on how to ignore files/folders in TortoiseSVN, but they all seem to address the situation where you are committing something and want to exclude items from that commit.
My problem is, I want to prevent certain file types from being downloaded during updates.
There are certain files that are only of interest to our art department (and, unfortunately, they're huge), which I don't want to clutter my hard drive with. But other files, in the same tree, I do need (so I cannot just exclude a whole branch).
So, is there a way to prevent certain extensions from being downloaded in the first place?
Comment: If your files are distinguishable by directory you can do a [sparse checkout/update](http://svnbook.red-bean.com/en/1.8/svn.advanced.sparsedirs.html). But that mechanism does not allow for selection by extension.
Here is another answer: You can't ignore certain extensions, but if all of the art files you want to ignore are in a single folder you can ignore the entire folder hierarchy. The only way I know how to do this is by creating a new checkout though.
In the checkout dialog there is a button that says "Choose items...", which opens a simplified repo browser dialog wherein you can choose exactly which folders and files are checked out. Every time you update it will not pick up anything that you leave unchecked in this dialog. So if you want to ignore all art in the "art" folder you would leave that folder unchecked so that the existing files are not checked out. Anything that gets added in the future will be ignored as well so you have to be absolutely sure you don't want anything that might get added later.
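If the command-line client is also an option, a rough sketch of the same idea using sparse directories (the repository URL and folder names here are placeholders):
```
svn checkout --depth immediates http://svn.example.com/repo/trunk wc
cd wc
svn update --set-depth infinity code   # pull down the folders you do need
svn update --set-depth exclude art     # keep the huge art folder out of the working copy
```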
Here is another answer: It appears that SVN doesn't provide a method for ignoring versioned files during a checkout/update.
Here is a quote from: http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-ignore.html
Ignoring Versioned Items
Versioned files and folders can never be ignored - that's a feature of Subversion. If you versioned a file by mistake, read the section called “Ignore files which are already versioned” for instructions on how to “unversion” it.
Now a suggestion would be to structure your checkout in a way that you can more selectively do an SVN checkout that doesn't contain the files that clutter the hard drive.
For instance --
repo structure:
/artProjectA/clientFiles
/artProjectA/adminFiles
So then,
For user's machine checkout: /artProject/clientFiles
For your machine checkout: /artProject/
Hope this helps.
Comment for this answer: Well, a restructuring is not gonna happen.
We got about 200GBs worth of data, and nearly 100 people accessing it. So they're not going to shuffle everything around, just because I want to save some harddrive space... ;)
|
Title: Will leaked memory always remain lost for the duration of C/C++ program?
Tags: c++;c;memory;memory-management;operating-system
Question: The following code has an obvious memory leak:
```void Memory_Leak(void);
void Lots_Of_Other_Stuff(void);
int main(){
Memory_Leak();
Lots_Of_Other_Stuff();
}
void Memory_Leak(void)
{
int *data = new int;
*data = 15;
return;
}
void Lots_Of_Other_Stuff(void){
//allocates/deletes more memory
//calls functions
//etc..
return;
}
```
For the duration of the program, can the memory ever be recovered?
Can the program write over lost memory, and reach a state where no memory has been lost?
Can the Operating System recover it while the program is still running?
Comment: It depends on the OS, short answers, I'm pretty sure it's: no, no, no
Here is the accepted answer: Standard C++ does not have any way to know you're not using the memory anymore.
Some platform-specific mechanisms exist for introspecting the memory heap, usually for debugging, e.g.
http://msdn.microsoft.com/en-us/library/974tc9t1(v=vs.80).aspx
In theory, you could maybe use something like that to take a "snapshot" of the heap state before you ran your ```Memory_Leak()```. Then after it was finished you could look for anything you considered to be a leak and free it. But don't do it. Only mentioning it for thoroughness.
The C++ way of avoiding leaks is to use "smart" pointers instead of "raw/naked/dumb/C-style" pointers. For instance:
```void Memory_Leak(void) // actually, with this change it won't leak anymore...
{
shared_ptr<int> data (new int);
*data = 15;
return;
}
```
The shared pointer is an object with a destructor, so it has an opportunity to run some code when its lifetime is over. That code releases the memory. In this case, the local variable ```data``` ends its life at the return statement, and if that shared_ptr hasn't been copied and stored elsewhere then the reference count held on the memory for the integer will be zero. So that memory will be freed.
You can read up more on smart pointers here on StackOverflow, Wikipedia, Google, etc.
http://en.wikipedia.org/wiki/Smart_pointer
Comment for this answer: @lori For the literal code exactly as written, yes. But the nuances of the `unique_ptr` and `std::move` etc. are rather deep for someone new to the C++ memory model, who might intend to pass it as a parameter or put the pointer into a standard container. Seemed a bit less cruel to start with `shared_ptr` in this case. :-)
Comment for this answer: In this instance I'd suggest a `unique_ptr` rather than `shared_ptr`, considering the reference counting overhead that comes with `shared_ptr`
Here is another answer: No, the memory will not be recovered until your program finishes executing.
No. If you are able to write over that memory then there was never a memory leak in the first place since you must still have some pointer to the allocated memory (allocating new memory with new or malloc will never give you the same already allocated memory).
No. The OS has no way to know that your program is not still using that memory, so no it cannot recover it.
Here is another answer: ```
For the duration of the program, will the memory ever be recovered?
```
No: unless you magically guess the address of the lost variable and call ```delete```, it is gone.
```
Can the program write over this memory now, and reach a state where no memory has been lost?
```
No - again, once you've lost the reference to the address of the allocated memory region, you cannot recover the corresponding memory chunk.
```
Can the Operating System recover it while the program is still running?
```
No, the operating system does not recover that memory until the process exits.
Here is another answer: Yes, the memory is lost for the duration of the program. There is no way for the operating system to know that you have lost all references to this memory location.
This is one of the fundamental differences between C/C++ and Java/C#.
Garbage collection is the mechanism which is used to work out whether a memory location no longer has anything referencing it, and is what allows the OS to reclaim unused memory - and is not available in C/C++
Comment for this answer: Well, wouldn't you tell a C programmer *"C++ uses iostream instead of printf"*, even if the functions still exist? To most C++ zealots here, *"Modern C++"* equates with *"idiomatic C++"* and *"C++11"* equates with *"standard C++"*. In practical terms many people cannot (or simply have not) made that leap...but someone has to be pushing the cultural shift or it'll never happen. Bjarne wrote in one of his recent books *"Unless your name is Stroustrup, this is not your father's C++."*...and it seems to be his opinion that naked pointers are no longer a legitimate option for serious projects.
Comment for this answer: C++ uses smart pointers for fine grained deterministic garbage collection. Its optional to use so its not all the way over with Java/C# but its not on the same side as C either.
Comment for this answer: C++ that uses raw pointers (ie manual memory management) is what we refer to as "C with classes". Modern C++ uses smart pointers. So you could add another language to your little list. `C/"C with classes" -> C++ -> Java/C#`
Comment for this answer: I agree with most of what you've said there, but since C++ doesn't use smart pointers by default, I would reword that to `there is an **option** to use smart pointers in C++` rather than `C++ **uses** smart pointers`
Comment for this answer: @HostileFork - I'll buy that! :)
|
Title: Initializing a memmap on identical files produces different arrays
Tags: python;numpy
Question: I am translating some algorithms and am checking my work by comparing outputs. I have a .img file created by IDL and a .img file created by Python. The two outputs are identical bitwise, but when I open a memmap to continue processing, the Python file loses some data.
```>>> mm = np.memmap("X:/eordentl/Processing Algorithms/processed_data_py/
C5704B-00045Z-03_verytop_2017_08_17_03_28_29/
C5704B-00045Z-03_verytop_2017_08_17_03_28_29_SWIRcalib.img",
dtype="float32", mode="r", offset=0, shape=(333,320,285))
>>> mm[0]
memmap([[ 0. , 0. , 0. , ..., 0. ,
0. , 0. ],
[ 0. , 0. , 0. , ..., 0. ,
0. , 0. ],
[ 0. , 0. , 0. , ..., 0. ,
0. , 0. ],
...,
[ 0.3388325 , 0.30027774, 0.32364482, ..., 0.40399274,
0.42492244, 0.41838124],
[ 0.28071803, 0.30334 , 0.32874447, ..., 0.10009208,
0.10556249, 0.05749646],
[ 0.09928307, 0.1659135 , 0.36206895, ..., 0.00572116,
-0.00990769, 0.00214016]], dtype=float32)
>>> mm2=np.memmap("X:/eordentl/Processing Algorithms/processed_data_idl/
C5704B-00045Z-03_verytop_2017_08_17_03_28_29_SWIRcalib.img",
dtype="float32", mode="r", offset=0, shape=(333,320,285))
>>> mm2[0]
memmap([[ 0.01608727, 0.03300966, 0.04274924, ..., 0.07621645,
0.07274907, 0.07459512],
[ 0.06294538, 0.07551169, 0.07973923, ..., 0.42498964,
0.38354877, 0.396222 ],
[ 0.34490117, 0.3083234 , 0.27291125, ..., 0.32263884,
0.31246758, 0.3155154 ],
...,
[ 0.3388325 , 0.30027774, 0.32364482, ..., 0.40399274,
0.42492244, 0.41838124],
[ 0.28071803, 0.30334 , 0.32874447, ..., 0.10009208,
0.10556249, 0.05749646],
[ 0.09928307, 0.1659135 , 0.36206895, ..., 0.00572116,
-0.00990769, 0.00214016]], dtype=float32)
>>> import filecmp
>>> filecmp.cmp("X:/eordentl/Processing Algorithms/processed_data_py/C5704B-00045Z-03_verytop_2017_08_17_03_28_29/C5704B-00045Z-03_verytop_2017_08_17_03_28_29_SWIRcalib.img","X:/eordentl/Processing Algorithms/processed_data_idl/C5704B-00045Z-03_verytop_2017_08_17_03_28_29_SWIRcalib.img")
True
```
I am incredibly confused. These are not the only differences, see
```>>> np.count_nonzero(mm2-mm)
109685
```
though the number is more likely due to floating point error.
Given that the files are bit by bit identical (verified through FC /b on windows cmd), what could be causing the difference, and how can I fix it?
Comment: For the first comment, the Python-generated one always has the zeros regardless of order of initialization (tested in new session). Am working on the second one now.
Comment: And yes, I did close the python-created file. (Phew!)
Comment: Haha, true. Just finished checking the with `mmap.mmap`, with the same results (python-created has 0's, IDL-created does not).
Comment: I think at this point I should say how I created the python file. I used `output = open(outfile, "wb")`, then wrote into the file with `output.write(arr.tobytes())`. I don't think there should be anything wrong with that (considering that the content of the two files appears to be identical).
Comment: Figured it out (I think). I was using the spectral library to interface with ENVI files. The spectral files handles memmaps to images pretty poorly, and I ended up with multiple maps open to a [email protected]. I rewrote the code to access the file using only one memmap, and that worked.
Comment: What happens if you map the files in the opposite order? Does the second one get the non-zero files no matter which one is mapped first, or does the Python-generated one get the non-zero files no matter which one is mapped first?
Comment: Also, what happens if you use the stdlib `mmap.mmap` to map the two files? Are the mmaps equal then? What if you then construct arrays off those mmaps using an explicit `buffer` (as described in the `numpy.memmap` docs)?
Comment: And one last thing: You did close or flush the Python-created file before memmapping it, right?
Comment: If I were you, I would have said damn rather than phew there. Sure, a silly mistake makes you feel dumb for a couple minutes, but it has the advantage of being easy for someone else to spot, and trivial for you to fix; if you didn't do anything silly, it's probably going to be a lot more of a pain to debug… But anyway, now you've ruled it out.
Comment: Well, now we know the problem isn't in numpy. And IIRC (although I'd have to check the code), numpy doesn't even use Python's mmap; it calls the POSIX or Windows functions directly, so if they're both getting the same results, it's probably not either of them doing anything weird. Which means… I'm stumped. Hopefully someone else can help.
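For reference, here is a minimal, self-contained sketch of the write-and-reopen round trip described in the comments, using a single memmap (the file name and shape are placeholders, not the files from the question):
```
import numpy as np

# Placeholder shape/dtype; the real cube in the question is (333, 320, 285) float32.
shape, dtype = (4, 5, 6), np.float32
arr = np.random.rand(*shape).astype(dtype)

# Write the array once, the same way described in the comments ...
with open("SWIRcalib_demo.img", "wb") as output:
    output.write(arr.tobytes())

# ... then reopen it through one memmap and keep using only that object.
mm = np.memmap("SWIRcalib_demo.img", dtype=dtype, mode="r", shape=shape)
assert np.array_equal(np.asarray(mm), arr)
```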
|
Title: Post request with ajax for Firebase Cloud Messaging
Tags: javascript;html;ajax;firebase-cloud-messaging
Question: I have written this script to send a notification from my HTML page.
When I try to send the request, I get both messages ```"Success"``` and ```"Fail"``` and the notification is not sent. I call the function ```get_data_for_notification()``` on a button click.
```<script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
<script type="text/javascript">
function get_data_for_notification(){
var title = document.getElementById('news_title').value;
var subtitle = document.getElementById('news_small_description').value;
$.ajax({
type : 'POST',
url : "https://fcm.googleapis.com/fcm/send",
headers : {
Authorization : 'key=mykey'
},
contentType : 'application/json',
dataType: 'json',
data: JSON.stringify({"to": "/topics/android", "priority" : "high", "notification": {"title":title,"body":subtitle}}),
success : alert("Success") ,
error : alert("Fail")
}) ;
}
</script>
```
Here is the accepted answer: The problem was the button type; I needed to use ```type="button"``` and not ```type="submit"```.
Here is another answer: Your script executes ```alert("Success")``` and ```alert("Fail")``` before sending the request and assigns their return values as the event handlers.
You have to wrap them in an anonymous function:
``` $.ajax({
type: 'POST',
url: "https://fcm.googleapis.com/fcm/send",
headers: {
Authorization: 'key=mykey'
},
contentType: 'application/json',
dataType: 'json',
data: JSON.stringify({
"to": "/topics/android",
"priority": "high",
"notification": {
"title": title,
"body": subtitle
}
}),
success: function(responseData) {
alert("Success");
},
error: function(jqXhr, textStatus, errorThrown) {
/*alert("Fail");*/ // alerting "Fail" isn't useful in case of an error...
alert("Status: " + textStatus + "\nError: " + errorThrown);
}
});
```
Comment for this answer: Check the request and response in the network tab of your developer tools, check the documentation on how to make a valid request, ...
Comment for this answer: Sorry it was a typo... but now i only get an alert `Status: Error, Error`.
|
Title: how do I get my altair plot from views to render on my html page in Django?
Tags: python;django;altair;vega
Question: I am trying to get my Altair plot to render from my views to my HTML page. I have tried everything from Stack Overflow to get this to work, but every time I try I don't get my plot. This is my code in the different sections.
Views.py:
```def search_site(request):
if request.method == "POST":
inputSite = request.POST.get('Site')
SITES = Ericsson_LTE_1D_Version_Sectir_ID_HOUR.objects.filter(SITE__contains = inputSite).values()
data = pd.DataFrame(SITES)
chart = alt.Chart(data).mark_line().encode(
y ='Avg_Nr_Of_RRC_Connected_Users:Q',
x ='PERIOD_START_TIME:T',
).to_json(indent=None)
return render(request,'VenueReporting/searchSite.html', {'site':inputSite,'Predictions':SITES,'chart':chart})
else:
return render(request, 'VenueReporting/searchSite.html',{})
```
HTML Page:
```<head>
<script src="https://cdn.jsdelivr.net/npm/vega@[VERSION]"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-lite@[VERSION]"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-embed@[VERSION]"></script>
</head>
<body>
<script type="text/javascript">
var chart = "{{chart}}";
vegaEmbed('#vis1', chart).then(result => console.log(result))
.catch(console.warn);
</script>
{% if Predictions %}
<h2>You Searched for:</h2>
<h3>{{site}}</h3>
<h2>The activity was:</h2>
<div id="vis1"></div>
{% else %}
<h1>That Venue does not exist or you put the wrong information</h1>
{% endif %}
</body>
```
Here is the accepted answer: The ```chart``` variable in the JavaScript should not be a string, it should be a JSON object. Also, Django's default behavior of escaping special characters will result in invalid JSON. To fix this, you can use something like this (notice there are no quotes):
```var chart = {{ chart|safe }};
```
Also, in your ```<script>``` tags, ```[VERSION]``` is not meant to be used literally. You should specify the versions of the libraries compatible with Altair, which you can find in ```alt.VEGA_VERSION```, ```alt.VEGAEMBED_VERSION```, and ```alt.VEGALITE_VERSION```:
``` <script src="https://cdn.jsdelivr.net/npm/vega@{{ alt.VEGA_VERSION }}"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-lite@{{ alt.VEGALITE_VERSION }}"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-embed@{{ alt.VEGAEMBED_VERSION }}"></script>
```
(assuming you pass ```alt``` to your template).
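For reference, a minimal sketch of reading those version constants (it only assumes Altair is installed); the resulting values, or the ```alt``` module itself, can then be added to the context dict that the view passes to ```render```:
```
import altair as alt

# Version strings referenced above; add these (or `alt` itself) to the
# context dict passed to render() so the template can interpolate them.
context_versions = {
    "VEGA_VERSION": alt.VEGA_VERSION,
    "VEGALITE_VERSION": alt.VEGALITE_VERSION,
    "VEGAEMBED_VERSION": alt.VEGAEMBED_VERSION,
}
print(context_versions)
```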
Comment for this answer: It's still not working. How do I pass alt to my template? Also for the JavaScript it says, "property assignment expected" as its highlighted in red.
Comment for this answer: okay i fixed the "Property assignment error" but I still am not getting a plot. I also still dont know how to pass alt to my template
Comment for this answer: okay I figured it out the problem was my javascript needed to be after my div call
|
Title: Update CSV to a mysqldb Ignoring the IDs which has no entries
Tags: mysql;sql;python-2.7;phpmyadmin
Question: I'm new to working with MySQL.
I have a table that looks like
```id,A,B,C,D,E,F,G,H
```
Now,I have ```CSV``` file with approximately 11000 entries which look like
```1,
2,Q,R,X,Y,Z
3,X,Y,Z
4,Q,R
100,R,X
5000,X,Q
200,
7000,
10000,
.....
```
My question is: how do I import the ```CSV``` to update only the table column ```H```, ignoring the ```ID```s which have no entries present? Any advice on this would be really great.
Expected Query Result for the Column ```H```
```ID H
1
2 Q,R,X,Y,Z
3 X,Y,Z
4 Q,R
100 R,X
5000 X,Q
200
7000
10000
.....
```
Comment: @Vatev Yes,Buddy ! i have only two columns in CSV for Example A&B where A has ID and B might have multiple values in a cell !
Comment: How many columns does the csv have? Do they all go in the H column in the table?
Comment: Well there aren't any quotes or escapes there, any csv parser will interpret those as separate cells.
|
Title: ...Java Screen Resolution Changing
Tags: java;swing
Question: We have developed a huge application using Java Swing, and it executes and runs well on all systems, but the problem is the resolution. If the resolution is 1280x768 it works well, meaning all the components including the scrollbar are visible and the application fits the width and height of the screen, but below 1280x768 it does not fit the screen. What I do is manually change the system resolution to 1280x768, and I also wrote a program which changes the resolution, but the problem is that most systems do not support more than 1024x768; on old systems with old VGA cards the maximum is 1024x768.
What is the way to resolve this? Which layout manager should I change?
Update
Our application will be going live in the next 5 days, so we need something quick; we tried FlowLayout but it does not give a good UI.
Also, how do we resize components when the window is maximized or minimized? How is that implemented?
Comment: Generally speaking, if you design the UI components in a way that resizing the app window lets the components grow and shrink in a useful way, you will have no such fundamental problems with different resolutions. This should be part of the UI design patterns. It is very implementation-specific to decide which layout manager to change, or what component properties to update, so I doubt there is a satisfying general answer. If you post samples, we probably can suggest ideas for what to modify, and how.
Comment: _going live in next 5 days_ so you have to learn all about Layout/Managers **quickly** :-) You need a manager that's easy to learn/use and at the same time powerful enough to fit your needs, my current preference is MigLayout. Alternatively, hire a consultant which does the work for you.
Comment: @jwenting if it's really only the layout, it might work for an expert (though a challenge :-) - if the rest of code is as badly designed as the ui (which is not improbable, I suspect), there's a _real_ problem.
Comment: if the thing is as large as he says, and as poorly designed as is apparent from it not scaling at all, 5 days won't be enough to either learn about layout managers himself or to hire experts, get them up to speed, and recode the entire user interface... IOW he's fchk'd.
Comment: @kleopatra I suspect they're using NullLayout of similar, people using that typically would not use a nicely decoupled design, thus have all the business logic integrated into the user interface code. Effectively means a complete rewrite of the system in 5 days. Any expert worth his fee will refuse such an assignment :)
Comment: ok let me put some part of my code
Comment: Code is huge and complicated, but i can explain things, Its TabbedPanel and each tabbed panel is set with GroupLayout and JScrollpane, so sometimes its moving out of my screen/system:)
Comment: Can you try [FlowLayout](http://docs.oracle.com/javase/6/docs/api/java/awt/FlowLayout.html) ?
Here is another answer: The smaller resolution could use a smaller and especially a narrower font. It is a huge task to substitute hard-coded coordinates with scaled ones, something like ```Scale.x(80)```, but it is a "dumb", dependable solution. If you can, still use a smaller font (Arial Narrow?).
Mind that a smaller resolution is often displayed on a monitor of the same physical size. Or, with today's tablets, tininess is acceptable.
Here is another answer: The answer basically depends on how your GUI is designed.
In some cases, a ```FlowLayout``` will allow components to wrap around.
```JScrollPane``` wrappers can be added around sections to make them independently scrollable. Along this line of thought, the entire current GUI could be placed in a ```JScrollPane``` and set never to be less than 1280x768 such that scrollbars will appear on smaller displays.
```JTabbedPane``` could also be used to stack sections of the GUI which are not commonly used in unison.
Comment for this answer: For big application Flow layuout will arrange components and move them down..which will abviously is bad
Comment for this answer: Yeah, it wasn't the top recommendation. JScrollPane is almost certainly what you want. A few books on UIX couldn't hurt either.
|
Title: Cant Install KeystoneJS on AWS EC2 - Errors Out
Tags: node.js;mongodb;amazon-web-services;amazon-ec2;keystonejs
Question: I have an EC2 instance that successfully has node and mongo installed on it (I've tested both). I'm trying to install KeystoneJS now, but it's throwing errors. Not really sure where I'm going wrong here. Everything works fine locally, I'm assuming it's something with how my EC2 is configured.
```npm install -g generator-keystone```
results in
```npm ERR! tar.unpack untar error /home/ec2-user/.npm/generator-keystone/0.3.7/package.tgz
npm ERR! Linux 3.14.35-28.38.amzn1.x86_64
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "install" "-g" "generator-keystone"
npm ERR! node v0.12.7
npm ERR! npm v2.11.3
npm ERR! path /usr/local/lib/node_modules/generator-keystone
npm ERR! code EACCES
npm ERR! errno -13
npm ERR! Error: EACCES, mkdir '/usr/local/lib/node_modules/generator-keystone'
npm ERR! at Error (native)
npm ERR! { [Error: EACCES, mkdir '/usr/local/lib/node_modules/generator-keystone']
npm ERR! errno: -13,
npm ERR! code: 'EACCES',
npm ERR! path: '/usr/local/lib/node_modules/generator-keystone',
npm ERR! fstream_type: 'Directory',
npm ERR! fstream_path: '/usr/local/lib/node_modules/generator-keystone',
npm ERR! fstream_class: 'DirWriter',
npm ERR! fstream_stack:
npm ERR! [ '/usr/local/lib/node_modules/npm/node_modules/fstream/lib/dir-writer.js:35:25',
npm ERR! '/usr/local/lib/node_modules/npm/node_modules/mkdirp/index.js:47:53',
npm ERR! 'FSReqWrap.oncomplete (fs.js:95:15)' ] }
npm ERR!
npm ERR! Please try running this command again as root/Administrator.
npm ERR! Please include the following file with any support request:
npm ERR! /home/ec2-user/npm-debug.log
```
```sudo npm install -g generator-keystone```
results in
```sudo: npm: command not found```
Here is another answer: If a command isn't available when you use sudo in Linux, it's normally because the command hasn't been added to the 'sudo path'. You can see another question which solves that issue here:
https://askubuntu.com/questions/611528/why-cant-sudo-find-a-command-after-i-added-it-to-path
I'm assuming you either used Ubuntu or the Amazon Linux image for your EC2 instance (you should specify this in the question in future, as errors like this can be solved in different ways depending on the OS - especially when it comes to PATH variables, which is likely the issue here).
The above answer I gave is how to do it on Ubuntu; I don't know if that convenience command (```sudo visudo```) is available in the Amazon image.
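As a quick workaround (just a sketch, reusing the npm location shown in the error output above), you can also call npm through its absolute path so sudo does not need to find it on its PATH:
```
sudo /usr/local/bin/npm install -g generator-keystone
```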
Also as a side note I'd highly recommend that you look into how to use NVM as a general practice when writing anything which depends on Node.js. It allows you to have multiple versions of node installed on one machine and adjusts your path accordingly to allow you to switch between which one you are using in each terminal (bash) instance.
(It also makes deployment easier IMO).
https://github.com/creationix/nvm
Good luck!
|
Title: cancelTouchesInView equivalent for NSGestureRecognizer
Tags: macos;gesture
Question: UIGestureRecognizer has ```cancelTouchesInView``` flag, so if this flag is set to false, the view or its subview will still get tap events.
Is there any equivalent for NSGestureRecognizer (in particular NSClickGestureRecognizer)?
Comment: How about subclass and implement [`canPrevent`](https://developer.apple.com/documentation/appkit/nsgesturerecognizer/1534503-canprevent) and then return `true`?
Comment: Or implement delegate method [`gestureRecognizer(_:shouldRecognizeSimultaneouslyWith:)`](https://developer.apple.com/documentation/appkit/nsgesturerecognizerdelegate/1529773-gesturerecognizer)?
|
Title: How to Add an Attachment to a User Story using Rally REST .NET
Tags: c#;.net;rest;rally
Question: We're in the process of porting our .NET Rally code from SOAP to the REST .NET API. So far so good, the REST API seems to be faster and is easier to use since there's no WSDL to break each time the work product custom fields change in the Rally Workspace.
I'm having trouble with one thing though as we try to replicate the ability to upload attachments. We're following a very similar procedure as to the one outlined in this posting:
Rally SOAP API - How do I add an attachment to a Hierarchical Requirement
Whereby the image is read into a System.Drawing.Image. We use the ImageToByteArray function to convert the image to a byte array which then gets assigned to the AttachmentContent, which is created first.
Then, the Attachment gets created, and wired up to both AttachmentContent, and the HierarchicalRequirement.
All of the creation events work great. The content object gets created fine. Then the new attachment called "Image.png" gets created and linked to the Story. But when I download the resulting attachment from Rally, Image.png has zero bytes! I've tried this with different images, JPEG's, PNG's, etc. all with the same results.
An excerpt of the code showing our process is included below. Is there something obvious that I'm missing? Thanks in advance.
``` // .... Read content into a System.Drawing.Image called imageObject ....
// Convert Image to byte array
byte[] imageBytes = ImageToByteArray(imageObject, System.Drawing.Imaging.ImageFormat.Png);
var imageLength = imageBytes.Length;
// AttachmentContent
DynamicJsonObject attachmentContent = new DynamicJsonObject();
attachmentContent["Content"] = imageBytes;
CreateResult cr = restApi.Create("AttachmentContent", attachmentContent);
String contentRef = cr.Reference;
Console.WriteLine("Created: " + contentRef);
// Set up attachment
DynamicJsonObject newAttachment = new DynamicJsonObject();
newAttachment["Artifact"] = story;
newAttachment["Content"] = attachmentContent;
newAttachment["Name"] = "Image.png";
newAttachment["ContentType"] = "image/png";
newAttachment["Size"] = imageLength;
newAttachment["User"] = user;
// Create the attachment in Rally
cr = restApi.Create("Attachment", newAttachment);
String attachRef = cr.Reference;
Console.WriteLine("Created: " + attachRef);
}
public static byte[] ImageToByteArray(Image image, System.Drawing.Imaging.ImageFormat format)
{
using (MemoryStream ms = new MemoryStream())
{
image.Save(ms, format);
// Convert Image to byte[]
byte[] imageBytes = ms.ToArray();
return imageBytes;
}
}
```
Here is the accepted answer: This issue also had me puzzled for a while - finally sorted it out about a week ago.
Two observations:
While Rally's SOAP API will serialize the byte array into a Base64 string behind the scenes, REST doesn't do this step for you and will expect a Base64-formatted String to be passed as the Content attribute for the AttachmentContent object.
System.Drawing.Image.Length as shown in your example won't provide the correct length that Rally's WSAPI is expecting. You need to pass the length of the Base64-formatted string after being converted back to a regular String. This is also the same as the length of the byte array.
I'm including a code sample to illustrate:
```// System Libraries
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing.Imaging;
using System.Drawing;
using System.IO;
using System.Web;
// Rally REST API Libraries
using Rally.RestApi;
using Rally.RestApi.Response;
namespace RestExample_CreateAttachment
{
class Program
{
static void Main(string[] args)
{
// Set user parameters
String userName = "[email protected]";
String userPassword = "password";
// Set Rally parameters
String rallyURL = "https://rally1.rallydev.com";
String rallyWSAPIVersion = "1.36";
//Initialize the REST API
RallyRestApi restApi;
restApi = new RallyRestApi(userName,
userPassword,
rallyURL,
rallyWSAPIVersion);
// Create Request for User
Request userRequest = new Request("user");
userRequest.Fetch = new List<string>()
{
"UserName",
"Subscription",
"DisplayName",
};
// Add a Query to the Request
userRequest.Query = new Query("UserName",Query.Operator.Equals,userName);
// Query Rally
QueryResult queryUserResults = restApi.Query(userRequest);
// Grab resulting User object and Ref
DynamicJsonObject myUser = new DynamicJsonObject();
myUser = queryUserResults.Results.First();
String myUserRef = myUser["_ref"];
//Set our Workspace and Project scopings
String workspaceRef = "/workspace/905-442-2228";
String projectRef = "/project/905-442-2228";
bool projectScopingUp = false;
bool projectScopingDown = true;
// Find User Story that we want to add attachment to
// Tee up Story Request
Request storyRequest = new Request("hierarchicalrequirement");
storyRequest.Workspace = workspaceRef;
storyRequest.Project = projectRef;
storyRequest.ProjectScopeDown = projectScopingDown;
storyRequest.ProjectScopeUp = projectScopingUp;
// Fields to Fetch
storyRequest.Fetch = new List<string>()
{
"Name",
"FormattedID"
};
// Add a query
storyRequest.Query = new Query("FormattedID", Query.Operator.Equals, "US43");
// Query Rally for the Story
QueryResult queryResult = restApi.Query(storyRequest);
// Pull reference off of Story fetch
var storyObject = queryResult.Results.First();
String storyReference = storyObject["_ref"];
// Read In Image Content
String imageFilePath = "C:\\Users\\username\\";
String imageFileName = "image1.png";
String fullImageFile = imageFilePath + imageFileName;
Image myImage = Image.FromFile(fullImageFile);
// Get length from Image.Length attribute - don't use this in REST though
// REST expects the length of the image in number of bytes as converted to a byte array
var imageFileLength = new FileInfo(fullImageFile).Length;
Console.WriteLine("Image File Length from System.Drawing.Image: " + imageFileLength);
// Convert Image to Base64 format
string imageBase64String = ImageToBase64(myImage, System.Drawing.Imaging.ImageFormat.Png);
// Length calculated from Base64String converted back
var imageNumberBytes = Convert.FromBase64String(imageBase64String).Length;
// This differs from just the Length of the Base 64 String!
Console.WriteLine("Image File Length from Convert.FromBase64String: " + imageNumberBytes);
// DynamicJSONObject for AttachmentContent
DynamicJsonObject myAttachmentContent = new DynamicJsonObject();
myAttachmentContent["Content"] = imageBase64String;
try
{
CreateResult myAttachmentContentCreateResult = restApi.Create("AttachmentContent", myAttachmentContent);
String myAttachmentContentRef = myAttachmentContentCreateResult.Reference;
Console.WriteLine("Created: " + myAttachmentContentRef);
// DynamicJSONObject for Attachment Container
DynamicJsonObject myAttachment = new DynamicJsonObject();
myAttachment["Artifact"] = storyReference;
myAttachment["Content"] = myAttachmentContentRef;
myAttachment["Name"] = "AttachmentFromREST.png";
myAttachment["Description"] = "Attachment Desc";
myAttachment["ContentType"] = "image/png";
myAttachment["Size"] = imageNumberBytes;
myAttachment["User"] = myUserRef;
CreateResult myAttachmentCreateResult = restApi.Create("Attachment", myAttachment);
List<string> createErrors = myAttachmentCreateResult.Errors;
for (int i = 0; i < createErrors.Count; i++)
{
Console.WriteLine(createErrors[i]);
}
String myAttachmentRef = myAttachmentCreateResult.Reference;
Console.WriteLine("Created: " + myAttachmentRef);
}
catch (Exception e)
{
Console.WriteLine("Unhandled exception occurred: " + e.StackTrace);
Console.WriteLine(e.Message);
}
}
// Converts image to Base 64 Encoded string
public static string ImageToBase64(Image image, System.Drawing.Imaging.ImageFormat format)
{
using (MemoryStream ms = new MemoryStream())
{
image.Save(ms, format);
// Convert Image to byte[]
byte[] imageBytes = ms.ToArray();
// Convert byte[] to Base64 String
string base64String = Convert.ToBase64String(imageBytes);
return base64String;
}
}
}
}
```
Comment for this answer: Thank you! This has been bugging me!
|
Title: How to check if my contact list members are registered with my app just like whatsapp does?
Tags: ios;swift;chat
Question: I am creating a chat app, for which I need to check whether each of my contact list members is registered with my app and show the list of registered members in my app.
Currently I am calling an API to check every number one by one. In the case of 800 contacts it gets called 800 times. I know this is not the best way to do it. So, can someone please help me out and suggest a better way to do it?
Below is my code:
``` func createContactNumbersArray() {
for i in 0...self.objects.count-1 {
let contact:CNContact! = self.objects[i]
if contact.phoneNumbers.count > 0 {
for j in 0...contact.phoneNumbers.count-1 {
print((contact.phoneNumbers[j].value).value(forKey: "digits")!)
let tappedContactMobileNumber = (contact.phoneNumbers[j].value).value(forKey: "digits")
let phoneNo = self.separateISDCodeMobileNo(mobileNo:tappedContactMobileNumber as! String)
contactNumbersArray.append(phoneNo.1)
}
}
}
callGetAppUserDetailService(mobileNumber: self.contactNumbersArray[currentIndex] as! String)
}
```
I am doing this whole process in the background and refreshing the member list on the front end in the current scenario.
I want to make the whole process as fast as WhatsApp.
Comment: Currently its one at a time.
Comment: Yeah. I think the same. Will change it. Thanks for the suggestion.
Comment: I have my own backend but there is another backend developer. Well I followed @kathayatnk suggestion. And for now it is more helpful. Thanks all for your suggestions and answers.
Comment: Does your API expect one number at time or whole number array?
Comment: That seems bad. If you have access to the API, you really should change that to accept Array of phone numbers and return only those numbers that are registered. This will reduce the call to API
Comment: Right. Without a proper API there is no way to do it. Do you build your own back-end or use some public ones like Firebase, Twillio, ConnectyCube?
Here is the accepted answer: There is no way to do it without back-end modifications. So you need to work closely with your back-end engineer and build an API for this.
Here is some advice on how you can build this API:
Have 2 APIs:
1) Upload the whole user address book to the back-end in a single request, something like:
```let addressBook = NSMutableOrderedSet()
let contact1 = AddressBookContact()
contact1.name = "Jony Ive"
contact1.phone = "281.895.2518"
addressBook.add(contact1)
let contact2 = AddressBookContact()
contact2.name = "Steve Why"
contact2.phone = "412739123123"
addressBook.add(contact2)
let deviceUDID: String? = nil
Request.uploadAddressBook(withUdid: deviceUDID, addressBook: addressBook, force: false, successBlock: { (updates) in
}) { (error) in
}
```
2) As a next step - retrieve a list of already registered users with phones from your address book, something like this:
```let deviceUDID: String? = nil
Request.registeredUsersFromAddressBook(withUdid: deviceUDID, successBlock: { (users) in
}) { (error) in
}
```
so now you can show it in the UI
Found this example here https://developers.connectycube.com/ios/address-book
Comment for this answer: Yes, actually I have done the same. Thanks for the help. :)
Here is another answer: I think you need to save the user's phone number to your database when the user registers with your app. Then you can find out which contacts are already using your chat app.
Comment for this answer: Thanks for the answer Dogan. Yes. I am doing this already. I am confused about calling the API for so many times like if I have 200 contacts then I have to call the API 200 times. I don't know if I am doing right or not. Is there any better way to achieve this?
Comment for this answer: Yes, I can. But if the app has 1000 or more users, then we'll get that big list and then I'll have to check it from that list.
Comment for this answer: Can you make a service call that returns only the contacts that are changed by the status of app usage? Then you need to consider updating just these contacts rather than checking all 200 contacts.
Here is another answer: In fact, you can do little on the client.
The normal way is: the server provides an interface that accepts an array of phone numbers to check and returns the phone numbers that are registered on the server.
|
Title: Should I Treat Entity Framework as an Unmanaged Resource?
Tags: c#;entity-framework;destructor;idisposable
Question: I am working with a class that uses a reference to EF in its constructor.
I have implemented ```IDisposable```, but I'm not sure if I need a destructor because I'm not certain I can classify EF as an unmanaged resource.
If EF is a managed resource, then I don't need a destructor, so I think this is a proper example:
```public ExampleClass : IDisposable
{
public ExampleClass(string connectionStringName, ILogger log)
{
//...
Db = new Entities(connectionStringName);
}
private bool _isDisposed;
public void Dispose()
{
if (_isDisposed) return;
Db.Dispose();
_isDisposed= true;
}
}
```
If EF is unmanaged, then I'll go with this:
```public ExampleClass : IDisposable
{
public ExampleClass(string connectionStringName, ILogger log)
{
//...
Db = new Entities(connectionStringName);
}
public void Dispose()
{
Dispose(true);
}
~ExampleClass()
{
Dispose(false);
}
private bool _isDisposed;
protected virtual void Dispose(bool disposing)
{
if (_isDisposed) return;
// Dispose of managed resources
if (disposing)
{
// Dispose of managed resources; assumption here that EF is unmanaged.
}
// Dispose of unmanaged resources
Db.Dispose();
_isDisposed = true;
//freed, so no destructor necessary.
GC.SuppressFinalize(this);
}
}
```
Which one is it?
Comment: I'd consider it managed, and expect it to implement it's own finalisers for it's unmanaged parts.
Comment: Is the source code enough? https://entityframework.codeplex.com/SourceControl/latest#src/EntityFramework/DbContext.cs
Comment: I would always control the `DbContext` creation and destruction. It needs to be created for a small unit of work. Its internals follow the unit of work pattern
Comment: IDisposable does not solely deal with unmanaged resources.
Comment: Can you then explain the differences between the code? Can you explain what exactly you're asking? Is your question _"Does a DbContext have a finalizer that disposes itself"_?
Comment: It's not clear _to me_ what _you know_ about IDisposable, descructors/finalizers and (un)managed resources, hence your question is not clear _to me_ as you seem to make some shortcuts in your explanation. It's not easy to grasp someone else's train of thought. Does [How does Dispose work with Entity Framework](http://stackoverflow.com/questions/12926483/how-does-dispose-work-with-entity-framework) answer your question?
Comment: I know plenty about IDisposable and (un)managed resources, but if you want to wait for someone who can decipher your terse language, be my guest. :-)
Comment: The problem is there's no clear-cut "yes" or "no" answer to "Is DbContext unmanaged?". It's a CLR class, so it's definitely a managed object. Yet in the background it uses database connections from a pool, which tend to be (managed wrapppers for) unmanaged resources. However, the DbContext and related, internal classes by default release those unmanaged resources themselves. When you don't open connections yourself, just disposing a DbContext is enough (and often not even necessary). Nothing else to be done.
Comment: @CodeCaster I did not make that statement; neither do my examples indicate it.
Comment: @JamesBarrass - thank you. but can you point to something to prove that to me? I'd like to rely on this going forward, if that's possible.
Comment: The second example has a destructor. If I'm dealing with only managed resources, then I don't need a destructor. Hence the title of the question. Is that more clear?
Comment: @CodeCaster It's not clear to me that you know about IDisposable based on your questions. If you cannot answer the question, please move on.
Comment: @CodeCaster No hard feelings. I would appreciate if you could point specifically to the points that are not clear, rather than stating that you don't know if I know, that I am making shortcuts, that I am terse.
Comment: @JamesBarrass - I'm not certain I'd find the answer in the source code. But I do appreciate your thoughts. I'll wait and see if someone answers it; in the meantime, I'll treat it as managed because that feels right; it feels like within EF it would deal with the unmanaged parts so I don't have to.
Comment: @CodeCaster - thank you. `However, the DbContext and related, internal classes by default release those unmanaged resources themselves.` That seems to make sense, and makes me want to treat EF as a managed resource. As I understand, I do not need a destructor for managed objects, so it appears I can use the first example with confidence until told otherwise.
Here is the accepted answer: You would never want to use a finalizer (destructor) in this case.
Whether ```DbContext``` contains unmanaged resources or not, and even whether it responsibly frees those unmanaged resources or not, is not relevant to whether you can try to invoke ```DbContext.Dispose()``` from a finalizer.
The fact is that, any time you have a managed object (which an instance of ```DbContext``` is), it is never safe to attempt to invoke any method on that instance. The reason is that, by the time the finalizer is invoked, the ```DbContext``` object may have already been GC-collected and no longer exist. If that were to happen, you would get a ```NullReferenceException``` when attempting to call ```Db.Dispose()```. Or, if you're lucky, and ```Db``` is still "alive", the exception can also be thrown from within the ```DbContext.Dispose()``` method if it has dependencies on other objects that have since been finalized and collected.
As this "Dispose Pattern" MSDN article says:
```
X DO NOT access any finalizable objects in the finalizer code path, because there is significant risk that they will have already been finalized.
For example, a finalizable object A that has a reference to another finalizable object B cannot reliably use B in A’s finalizer, or vice versa. Finalizers are called in a random order (short of a weak ordering guarantee for critical finalization).
```
Also, note the following from Eric Lippert's When everything you know is wrong, part two:
```
Myth: Finalizers run in a predictable order
Suppose we have a tree of objects, all finalizable, and all on the finalizer queue. There is no requirement whatsoever that the tree be finalized from the root to the leaves, from the leaves to the root, or any other order.
Myth: An object being finalized can safely access another object.
This myth follows directly from the previous. If you have a tree of objects and you are finalizing the root, then the children are still alive — because the root is alive, because it is on the finalization queue, and so the children have a living reference — but the children may have already been finalized, and are in no particularly good state to have their methods or data accessed.
```
Something else to consider: what are you trying to dispose? Is your concern making sure that database connections are closed in a timely fashion? If so, then you'll be interested in what the EF documentation has to say about this:
```
By default, the context manages connections to the database. The context opens and closes connections as needed. For example, the context opens a connection to execute a query, and then closes the connection when all the result sets have been processed.
```
What this means is that, by default, connections don't need ```DbContext.Dispose()``` to be called to be closed in a timely fashion. They are opened and closed (from a connection pool) as queries are executed. So, though it's still a very good idea to make sure you always call ```DbContext.Dispose()``` explicitly, it's useful to know that, if you don't do it or forget for some reason, by default, this is not causing some kind of connection leak.
And finally, one last thing you may want to keep in mind, is that with the code you posted that doesn't have the finalizer, because you instantiate the ```DbContext``` inside the constructor of another class, it is actually possible that the ```DbContext.Dispose()``` method won't always get called. It's good to be aware of this special case so you are not caught with your pants down.
For instance, suppose I adjust your code ever so slightly to allow for an exception to be thrown after the line in the constructor that instantiates the ```DbContext```:
```public ExampleClass : IDisposable
{
public ExampleClass(string connectionStringName, ILogger log)
{
//...
Db = new Entities(connectionStringName);
// let's pretend I have some code that can throw an exception here.
throw new Exception("something went wrong AFTER constructing Db");
}
private bool _isDisposed;
public void Dispose()
{
if (_isDisposed) return;
Db.Dispose();
_isDisposed= true;
}
}
```
And let's say your class is used like this:
```using (var example = new ExampleClass("connString", log))
{
// ...
}
```
Even though this appears to be a perfectly safe and clean design, because an exception is thrown inside the constructor of ```ExampleClass``` after a new instance of ```DbContext``` has already been created, ```ExampleClass.Dispose()``` is never invoked, and by extension, ```DbContext.Dispose()``` is never invoked either on the newly created instance.
You can read more about this unfortunate situation here.
To ensure that the ```DbContext```'s ```Dispose()``` method is always invoked, no matter what happens inside the ```ExampleClass``` constructor, you would have to modify the ```ExampleClass``` class to something like this:
```public ExampleClass : IDisposable
{
public ExampleClass(string connectionStringName, ILogger log)
{
bool ok = false;
try
{
//...
Db = new Entities(connectionStringName);
// let's pretend I have some code that can throw an exception here.
throw new Exception("something went wrong AFTER constructing Db");
ok = true;
}
finally
{
if (!ok)
{
if (Db != null)
{
Db.Dispose();
}
}
}
}
private bool _isDisposed;
public void Dispose()
{
if (_isDisposed) return;
Db.Dispose();
_isDisposed= true;
}
}
```
But the above is really only a concern if the constructor is doing more than just creating an instance of a ```DbContext```.
Comment for this answer: great response; thank you. however, perhaps we can flesh out a few points via comment: first, can you explain why EF is a managed object? Naively, I think of EF as a wrapper around a db connection (of course more, but that much in this context). I assume it is managed the same way I assume a class that wraps a file stream is a managed object (it's written in C# and therefore under the control of the CLR). The stream itself is opened by, perhaps Win32 dlls, I don't know, but it is not managed by the CLR. You answered my question, but you didn't tell me why EF is an unmanaged object; am I right?
Comment for this answer: second: you said `They are opened and closed (from a connection pool) as queries are executed. So, though it's still a very good idea to make sure you always call DbContext.Dispose() explicitly, it's useful to know that, if you don't do it or forget for some reason, by default, this is not causing some kind of connection leak.` and that alleviates my concern. I felt I needed to implement IDisposable on my wrapper classes, at first chiefly because I worried EF could be unmanaged. But I know that IDisposable is properly used to free both kinds of resources. So this part is appreciated as well
Comment for this answer: Finally, thank you for the bonus consideration, re: the exception in the constructor. That is a point I had not considered, and a subtlety about the pattern I had not appreciated.
Comment for this answer: great. and I understood not using a destructor for managed resources, that is why I didn't do it in my examples. But I appreciate the reminder for everyone else.
Comment for this answer: About your initial comment, it sounds like you understood the point correctly. All .NET objects are managed and subject to garbage collection, this includes `DbContext`. In contrast, an unmanaged resource is usually something like a file handle that you need to close using low level winapi functions. You should only ever define finalizers to clear unmanaged resources, which is basically almost never.
Here is another answer: C# provides garbage collection and thus does not need an explicit destructor. If you do control an unmanaged resource, however, you will need to explicitly free that resource when you are done with it. Implicit control over this resource is provided with a Finalize( ) method (called a finalizer), which will be called by the garbage collector when your object is destroyed.
https://www.oreilly.com/library/view/programming-c/0596001177/ch04s04.html
|
Title: Run multiple script in ubuntu server
Tags: python;bash;shell
Question: I have 7 ```py``` files which have to run continuously. These ```py``` scripts are in different locations. I am running these files like ```watch -n 2 ./myscript.py```. Is there a way I can create one bash file or something similar which runs all 7 of my ```py``` scripts in parallel, so I only have to start one script?
Comment: python script1.py &
python script2.py & (basically we send it to background)
Here is another answer: ```#!/bin/bash
for folder_name in Fol1 Fol2 F3 F4 folderX
do
    cd "$folder_name"
    # run any command in the folder
    cd ..   # back to the main folder
done
```
Here is another answer: I would have just put a comment on Aditya's answer if I had enough reputation, but if you want to run in parallel and still use watch, you can also do this (assuming you have a shebang in your scripts giving the right Python interpreter):
```watch 'pathto/script1.py & pathto/script2.py & pathto/script3.py &'
```
(maybe you can also think of adding those scripts to your $PYTHONPATH to avoid giving the path to the scripts)
Here is another answer: The simplest solution to run multiple Python files in parallel is to tell each process to go into the background with the & shell operator.
``` python script1.py &
python script2.py &
python script3.py &
.......
```
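Putting these suggestions together, one possible wrapper script (the paths below are placeholders for the 7 scripts in their different locations) could look like this:
```
#!/bin/bash
scripts=(
    /path/to/dir1/script1.py
    /path/to/dir2/script2.py
    /path/to/dir3/script3.py
)

for s in "${scripts[@]}"; do
    python "$s" &        # launch each script in the background
done

wait                     # keep this wrapper alive until all of them exit
```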
|
Title: Exclude nodes based on other relationship
Tags: neo4j;cypher
Question: I am trying to get a result of Users and Accounts where the Users are simultaneously ```:USER``` and ```:ADMIN``` of the Accounts, and the Accounts do not have the ```:PARENT``` relation to or from any other account.
Background:
This gives me all accounts that are neither parent accounts nor child accounts:
```MATCH (a: Account)
WHERE NOT (a)-[:PARENT]-()
RETURN a
```
This gives me all accounts where all Users also have the ```:ADMIN``` relation to an Account:
```MATCH (u)-[:ADMIN]-(a)-[:USER]-(u)
RETURN a, u
```
Problem:
When trying to combine the two, I still get Accounts with ```:PARENT``` relations in my results:
```MATCH (u)-[:ADMIN]-(a)-[:USER]-(u)
WHERE NOT (a)-[:PARENT]-()
RETURN u, a
```
It's like the ```WHERE``` on row 2 here does not have any effect. It is possible that the unwanted Accounts show up because they match the first ```MATCH``` but from here I would like to exclude all Accounts that has any ```:PARENT``` relation.
What I have tried
I have been trying to use ```OPTIONAL MATCH```, the ```WITH``` keyword, and matching in different orders and variations for the criteria. The three snippets above are the closest I can get to describing where it goes wrong.
Another way to approach this I imagine could be
```MATCH (a: Account)
OPTIONAL MATCH (a)-[p:PARENT]-()
WITH a, p
WHERE p IS NULL
OPTIONAL MATCH (u)-[relu: USER]-(a)-[rela: ADMIN]-(u)
with u, a, relu, rela
WHERE NOT relu IS NULL
AND NOT rela IS NULL
RETURN a, u
```
Here I get too few results. It's missing several accounts that do not have any ```:PARENT``` relation but still have Users with both ```:USER``` and ```:ADMIN``` relations to the account.
Seems like I missed some vital point of how the queries are aggregated.
Comment: Is the direction of the relationships not relevant? That is, `WHERE NOT (a)-[:PARENT]-()` could be formulated as `WHERE NOT (a)-[:PARENT]->()` or reverse, based on your model.
Comment: Okay. Another minor thing that I noticed is that label for `a`, `(a:Account)` is omitted from the 2nd and 3rd queries. I guess it should not make a difference, but still worth noting.
Comment: The `PARENT` relation has a direction but I exclude both incoming and outgoing to get "all accounts that are neither parent accounts nor child accounts".
Comment: I am assuming a variable like `a` is only needed to be declared once, as it is the same object I am referring?
Here is the accepted answer: (This answer is work-in-progress.)
I created a small example data set:
```CREATE
(a:Account {name: 'acc1'}),
(u:User {name: 'user1'}),
(u)-[:ADMIN]->(a),
(u)-[:USER]->(a)
```
This query seems to work. The only difference from your 3rd query is the addition of ```:Account```.
```MATCH (u)-[:ADMIN]-(a:Account)-[:USER]-(u)
WHERE NOT (a)-[:PARENT]-()
RETURN u, a
```
Comment for this answer: Sure. The best is to craft a small example dataset that demonstrates the issue. Alternatively, you can also Twitter DM/email me a zipped graph.db or a GraphML file if its not confidential.
Comment for this answer: If it's not too large, you can export it to a GraphML/CSV and share it on a GitHub Gist, pastebin or similar. Alternatively, you can upload the zip somewhere (maybe Dropbox or similar)
Comment for this answer: Oh wow! Ok so I see now that that declaration does make a difference here! Why is that? These relations only point at Accounts anyway.
Comment for this answer: Running query # 3 with the Account declaration gives me the same results as my alternative query # 4 though. Which I cannot understand why it is wrong. Maybe I can include an example db somehow to show this...?
Comment for this answer: not confidential. This "craft a small example dataset" - How would I best share that inside this SO question?
Comment for this answer: Solved, thanks. My 3rd query gave unexpected results because of not labeling enough. (Still beats me,) My 4th query gave unexpected, but correct (same as yours), results because it also sorts out those Accounts who has Users who are related to more than one account.
|
Title: How to display process title by ps?
Tags: ps
Question: As per setproctitle(3), the process title appears in the ```ps``` output. But after looking through ps(1), I still have no idea how to display it with ```ps```.
Comment: option `-o comm` will not give expected result ?
Comment: Many things can be learned by reading the fine manual. But it should be done as a last resort…
Comment: As @Archemar wrote : Issue ps -eo args,comm,command and observe the misc ways to display the process name. I bet that the second column displays what you wish.
Comment: @Archemar and @MC68020 Now I have figured it out. The process title is displayed by option `-o cmd`...
Here is the accepted answer: ```setproctitle()``` indeed modifies the ```argv[0]``` of the calling process, so the process title can be displayed with ```ps -o cmd```.
|
Title: CompletableFutures and exception handling in Java
Tags: java;exception;completable-future
Question: I'm new to the whole CompletableFutures business. Currently, I'm trying to populate a futures list with objects retrieved from a service call and then return the list itself. However, I'm getting
```error: unreported exception InvalidRequestException; must be caught or declared to be thrown``` and ```unreported exception DependencyFailureException; must be caught or declared to be thrown``` errors even though I'm using a try/catch block and declare the exceptions.
Here's what I have so far:
```public List<Dog> getDogs(List<String> tagIds, String ownerId)
throws InvalidRequestException, DependencyFailureException {
List<CompletableFuture<Dog>> futures = new ArrayList<>(tagIds.size());
List<Dog> responses = new ArrayList<>(tagIds.size());
for(String tagId : tagIds) {
futures.add(CompletableFuture.supplyAsync(() -> {
try {
return getDog(tagId, ownerId);
} catch (InvalidRequestException ire) {
throw new InvalidRequestException("An invalid request to getDog", ire);
} catch (DependencyFailureException dfe) {
throw new DependencyFailureException("A dependency failure occurred during a getDog call", dfe);
}
}));
}
return futures.stream()
.map(CompletableFuture::join)
.collect(Collectors.toList());
}
```
Any idea what I'm missing?
Comment: Presumably your `InvalidRequestException` and `DependencyFailureException` types are checked. The `Supplier` functional interface does not declare throwing any checked exceptions, let alone your custom ones. You'll need to handle them internally or rethrow them wrapped in an unchecked exception type.
Comment: An example of what? Handling is up to you, whatever process you've set up in your application. Wrapping in an unchecked exception is also discussed [here](http://stackoverflow.com/questions/484794/wrapping-a-checked-exception-into-an-unchecked-exception-in-java).
Comment: Sotirios Delimanolis, do you have an example of how to go about doing something like that?
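A minimal sketch of the wrapping approach the comments describe (getDog, Dog and the exception types come from the question; using CompletionException as the unchecked wrapper is an assumption, any RuntimeException subclass would do):
```import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.stream.Collectors;

public List<Dog> getDogs(List<String> tagIds, String ownerId) {
    List<CompletableFuture<Dog>> futures = new ArrayList<>(tagIds.size());
    for (String tagId : tagIds) {
        futures.add(CompletableFuture.supplyAsync(() -> {
            try {
                return getDog(tagId, ownerId);
            } catch (InvalidRequestException | DependencyFailureException e) {
                // Supplier cannot declare checked exceptions, so rethrow wrapped
                throw new CompletionException(e);
            }
        }));
    }
    // join() rethrows the CompletionException if any task failed
    return futures.stream()
            .map(CompletableFuture::join)
            .collect(Collectors.toList());
}
```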
|
Title: Rails Payola: add a coupon to an existing subscription
Tags: ruby-on-rails;stripe-payments;payola
Question: We're working on a SaaS product that uses Payola to handle payment, and we'd like to add a referral promotion. Adding the coupon to the referee is simple enough (hidden field on the form with the coupon code), but there doesn't seem to be any obvious way of applying a coupon to an existing subscription.
I've checked the Payola source, and there doesn't seem to be any methods dealing with applying a coupon code to an existing subscription, just for a new one.
Can we just get the ```Stripe::Customer``` object and use this answer: How to Apply a Coupon to a Stripe Customer to apply the coupon? Will that mess up Payola at all?
Here is the accepted answer: Since Payola stores the subscription details in its own table, it isn't sufficient to just update the subscription on Stripe. Now, if we take a look at subscription_controller.rb:28, we see what Payola itself does when a new coupon is applied (you can apply a new coupon with Payola when changing the plan, as can be seen indirectly from the before_filter find_plan_coupon_and_quantity, which is also called on the change_plan method; the find_plan_coupon_and_quantity method leads to the call ```@coupon = cookies[:cc] || params[:cc] || params[:coupon_code] || params[:coupon]```). What it does is call ```Payola::ChangeSubscriptionPlan.call(@subscription, @plan)```, which in turn is declared in change_subscription_plan.rb:3 and calls another method, retrieve_subscription_for_customer, in the same file. This method is the key here, as it retrieves the actual subscription from Stripe and returns it. Here is that method for reference:
```def self.retrieve_subscription_for_customer(subscription, secret_key)
customer = Stripe::Customer.retrieve(subscription.stripe_customer_id, secret_key)
customer.subscriptions.retrieve(subscription.stripe_id)
end
```
After fetching the subscription, it is updated according to the new plan details and stored in Payola's own data structures in the database.
After this long and exhausting investigation, we can see that it should be fine to update Stripe manually and then apply the same kind of update to the Payola subscription as described above. So to answer your original question about messing up Payola: yes, it will mess up Payola, but you can fix that manually by copying the code Payola itself uses to stay in sync.
Hope this helps you achieve the desired functionality, or at least points you in the right direction.
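For reference, a rough sketch of what that manual update could look like (it mirrors the retrieve_subscription_for_customer helper quoted above; the exact coupon call depends on your Stripe gem version, and the Payola column name is an assumption):
```# subscription is a Payola subscription record, secret_key is your Stripe secret key
def apply_coupon(subscription, coupon_code, secret_key)
  customer   = Stripe::Customer.retrieve(subscription.stripe_customer_id, secret_key)
  stripe_sub = customer.subscriptions.retrieve(subscription.stripe_id)

  # Legacy Stripe API style: set the coupon on the subscription and save it
  stripe_sub.coupon = coupon_code
  stripe_sub.save

  # Keep Payola's own record in sync, as ChangeSubscriptionPlan does for plan changes
  subscription.update(coupon: coupon_code) if subscription.respond_to?(:coupon)
end
```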
Comment for this answer: Thank you for the answer. For my purposes I was looking to automate adding a coupon because subsequent charges would be less than the first. But I realized that I could use the setup-fee feature by passing that when the subscription is created to increase the cost of the first subscription. Thank you for the investigation, I think it will help others.
|
Title: Two+ node elasticsearch cluster - mirroring?
Tags: elasticsearch
Question: We wish to place an elasticsearch cluster on top of a kubernetes cluster (currently with 2 nodes, but we have plans to increase this).
Is it possible to configure elasticsearch in such a way that every node in the cluster contains identical data, so that if a node is lost the remaining nodes in the cluster can continue to function?
Here is the accepted answer: Yes, you need to set the number of replicas equal to the number of nodes minus one.
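For the 2-node cluster in the question that means one replica per shard; a minimal sketch of updating an existing index (the index name is illustrative):
```PUT /my_index/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
```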
Comment for this answer: This is the command ```curl -XPUT "localhost:9200//_settings?pretty" -H 'Content-Type: application/json' -d' { "number_of_replicas": 1 }'```
Comment for this answer: thanks for the information. We set the ES option "index.number_of_replicas:" in our config file. To update the existing running indices we used a curl XPUT command as described [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html#indices-update-settings)
|
Title: Can open blob returned by XMLHTTPRequest, but can't upload to Azure
Tags: javascript;node.js;azure;azure-storage;azure-blob-storage
Question: I am able to upload a file to my vendor's API, and the vendor responds with a .png file as binary data. I am able to write this out to a blob in the browser, but I can't get it to upload to Azure blob storage. I also tried uploading it to a Web directory using fs.writeFile, but that produces a corrupt/non-bitmap image.
Ideally, I would like to upload my blob directly into Azure, but when I try it gives me the following error:
```
TypeError: must start with number, buffer, array or string
```
If I need to upload the blob to a Web directory and use Azure's createBlockBlobFromLocalFile, I would be more than happy to, but my attempts have failed thus far.
Here is my XMLHTTPRequest that opens the image in the browser that is returned after I post my file:
```var form = document.forms.namedItem("fileinfo");
form.addEventListener('submit', function (ev) {
var oData = new FormData(form);
var xhr = new XMLHttpRequest();
xhr.responseType = "arraybuffer";
xhr.open("POST", "http://myvendorsapi/Upload", true);
xhr.onload = function (oEvent) {
if (xhr.status == 200) {
var blob = new Blob([xhr.response], { type: "image/png" });
var objectUrl = URL.createObjectURL(blob);
window.open(objectUrl);
console.log(blob);
var containerName = boxContainerName;
var filename = 'Texture_0.png';
$http.post('/postAdvanced', { containerName: containerName, filename: filename, file: blob }).success(function (data) {
//console.log(data);
console.log("success!");
}, function (err) {
//console.log(err);
});
} else {
oOutput.innerHTML = "Error " + xhr.status + " occurred when trying to upload your file.<br \/>";
}
};
xhr.send(oData);
ev.preventDefault();
}, false);
```
Here is my Node backend for the /postAdvanced call:
```app.post('/postAdvanced', function (req, res, next) {
var containerName = req.body.containerName;
var filename = req.body.filename;
var file = req.body.file;
if (!Buffer.isBuffer(file)) {
// Convert 'file' to a binary buffer
}
var options = { contentType: 'image/png' };
blobSvc.createBlockBlobFromText(containerName, filename, file, function (error, result, response) {
if (!error) {
res.send(result);
} else {
console.log(error);
}
});
})
```
If uploading directly to Azure isn't possible, then if I can figure out how to write this blob to a directory, I can get it into Azure via createBlockBlobFromLocalFile.
Comment: When I console.log the file variable it is an empty object. Could it be that I am not passing anything back to node/what can be done?
Here is the accepted answer: I have solved the issue. I needed to base64 encode the data on the client side before passing it to node to decode to a file. I needed to use XMLHttpRequest to get the binary data properly, as jQuery AJAX appears to have an issue with returning binary responses (see here: http://www.henryalgus.com/reading-binary-files-using-jquery-ajax/).
Here is my front end:
```var form = document.forms.namedItem("fileinfo");
form.addEventListener('submit', function (ev) {
var oData = new FormData(form);
var xhr = new XMLHttpRequest();
xhr.responseType = "arraybuffer";
xhr.open("POST", "http://vendorapi.net/Upload", true);
xhr.onload = function (oEvent) {
if (xhr.status == 200) {
var blob = new Blob([xhr.response], { type: "image/png" });
//var objectUrl = URL.createObjectURL(blob);
//window.open(objectUrl);
console.log(blob);
var blobToBase64 = function(blob, cb) {
var reader = new FileReader();
reader.onload = function() {
var dataUrl = reader.result;
var base64 = dataUrl.split(',')[1];
cb(base64);
};
reader.readAsDataURL(blob);
};
blobToBase64(blob, function(base64){ // encode
var update = {'blob': base64};
var containerName = boxContainerName;
var filename = 'Texture_0.png';
$http.post('/postAdvancedTest', { containerName: containerName, filename: filename, file: base64}).success(function (data) {
//console.log(data);
console.log("success!");
// Clear previous 3D render
$('#webGL-container').empty();
// Generated new 3D render
$scope.generate3D();
}, function (err) {
//console.log(err);
});
})
} else {
oOutput.innerHTML = "Error " + xhr.status + " occurred when trying to upload your file.<br \/>";
}
};
xhr.send(oData);
ev.preventDefault();
}, false);
```
Node Backend:
```app.post('/postAdvancedTest', function (req, res) {
var containerName = req.body.containerName
var filename = req.body.filename;
var file = req.body.file;
var buf = new Buffer(file, 'base64'); // decode
var tmpBasePath = 'upload/'; //this folder is to save files download from vendor URL, and should be created in the root directory previously.
var tmpFolder = tmpBasePath + containerName + '/';
// Create unique temp directory to store files
mkdirp(tmpFolder, function (err) {
if (err) console.error(err)
else console.log('Directory Created')
});
// This is the location of download files, e.g. 'upload/Texture_0.png'
var tmpFileSavedLocation = tmpFolder + filename;
fs.writeFile(tmpFileSavedLocation, buf, function (err) {
if (err) {
console.log("err", err);
} else {
//return res.json({ 'status': 'success' });
blobSvc.createBlockBlobFromLocalFile(containerName, filename, tmpFileSavedLocation, function (error, result, response) {
if (!error) {
console.log("Uploaded" + result);
res.send(containerName);
}
else {
console.log(error);
}
});
}
})
})
```
|
Title: problem with logout script in php
Tags: php;mysql;html
Question: I'm a beginner in PHP, and I am trying to create a login and logout. But I am having problems logging out. My logout just calls the login form, which is this:
```<?
session_start();
session_destroy();
?>
<table width="300" border="0" align="center" cellpadding="0" cellspacing="1" bgcolor="#CCCCCC">
<tr>
<form name="form1" method="post" action="checklogin.php">
<td>
<table width="100%" border="0" cellpadding="3" cellspacing="1" bgcolor="#FFFFFF">
<tr>
<td colspan="3"><strong>Member Login </strong></td>
</tr>
<tr>
<td width="78">Username</td>
<td width="6">:</td>
<td width="294"><input name="myusername" type="text" id="myusername"></td>
</tr>
<tr>
<td>Password</td>
<td>:</td>
<td><input name="mypassword" type="text" id="mypassword"></td>
</tr>
<tr>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td><input type="submit" name="Submit" value="Login"></td>
</tr>
</table>
</td>
</form>
</tr>
</table>
```
My problem is, when I press the back button in the browser, whoever is using it can still access what is not supposed to be accessible when a user hasn't logged in.
Do I need to add code on the user page?
I have this code on the user page:
```<?
session_start();
if(!session_is_registered(myusername)){
header("location:main_login.php");
}
?>
```
What can you recommend so that a script will prompt for the username and password again when a user clicks the back button?
Here is the accepted answer: You've destroyed the session but are using a deprecated function, ```session_is_registered()```, to check whether the user is still authorised. As you can see here, you should not be using this any more.
Instead when the user is authorized on the login page, set ```$_SESSION['user'] = true```. You could also set it to some data about that user. For example, I like to register as much information about the user as possible to prevent querying the database a large number of times in the future.
Then this variable will be unset when you use session_destroy in your logout script. This means that in order to protect a page from a logged out user, you just need to include the following:
```if(!isset($_SESSION['user'])) header("Location: main_login.php");
```
You should also protect your login page from logged in users so that they cannot login, whilst already being logged in:
```if(isset($_SESSION['user']) && $_GET['action'] !== 'logout') header("Location: index.php");
```
This assumes you are using a query string on your login page to determine whether the user is trying to login or logout. If a logged in visitor wants to logout, they will have login.php?action=logout in their url and so will be allowed to logout. If not, they will be prevented from accessing the login page, as they have already logged in, and be sent straight to index.php (or wherever your protected section is).
If your login page is separate from your logout page, you don't need the $_GET check at all.
Here is another answer: There are actually two problems... one is that the page may be cached in the browser; the second is that the page may be cached on the server. The more likely one is actually the first. The best way is to have a ```Pragma: no-cache``` directive in the head section of the pages (IIRC you need two statements, one for HTTP 1.0 and one for HTTP 1.1, and they require different directives). You need to put it on the pages that are only available when logged in (or at least the most sensitive ones). This will cause the browser to go back to the server.
The other is server caching, which is actually less likely - you would probably know if you or, say, Zend Optimizer did some caching. For this you can use session_unset or session_is_registered. But try Pragma first.
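A minimal sketch of sending such no-cache directives from PHP at the top of every protected page (the exact header set is an assumption; combine it with the session check from the accepted answer):
```<?php
session_start();

// HTTP/1.1 clients and proxies
header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
// HTTP/1.0 clients
header("Pragma: no-cache");
// Belt and braces: mark the page as already expired
header("Expires: Thu, 01 Jan 1970 00:00:00 GMT");

if (!isset($_SESSION['user'])) {
    header("Location: main_login.php");
    exit();
}
?>
```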
Here is another answer: Some browsers also persist the webpage so that when you click the back button it "loads" immediately. Have you tried any other browsers?
Here is another answer: Unset the variable "myusername" using session_unset()
Here is another answer: Another little tidbit of information from php.net : session destroy
```
session_destroy() destroys all of the data associated with the current session. It does not unset any of the global variables associated with the session, or unset the session cookie. To use the session variables again, session_start() has to be called.
In order to kill the session altogether, like to log the user out, the session id must also be unset. If a cookie is used to propagate the session id (default behavior), then the session cookie must be deleted. setcookie() may be used for that.
```
Here is another answer: Page 1:
While logging in, create a session variable for the user, like:
```session_start();
$_SESSION['user'] = $userId;
```
Page 2:
When logging out, unset this session variable as well:
```unset($_SESSION['user']); // session_unregister('user') also works, but is deprecated
```
Page 3:
On the remaining pages where you want only logged-in users, use this type of check:
```if (!isset($_SESSION['user'])) {
    header("Location: loginPage.php");
    exit();
}
```
|
Title: Are these Stephen Hawking's statements legitimate?
Tags: cosmology;time;big-bang;laws-of-physics
Question: I am reading 'Brief Answers to the Big Questions' by Stephen Hawking and it seems to me that some of his statements are just hypotheses, but they are written in such a way that they 'sound' like they are supposed to be scientific facts.
For example:
There was no time before the Big Bang.
I do not know much about cosmology so I tried to look it up and to me it looked like there isn't any consensus.
Physical laws are unchangeable and universal.
While this assumption certainly makes life easier and no one has ever seen anything that would disprove it, I am pretty sure that we just don't know that. We don't know whether the laws of nature were and will be the same, just as we don't know whether they are any different in distant galaxies. It's all just convenient assumptions, right? (Btw. yes, I know about Occam's razor, but it just triggers me a little that he never uses phrases such as 'I assume' or 'hypothetically' when making this statement.)
Comment: There's quite a bit of evidence that physical laws have not changed for a long time. For example, search for "Oklo natural nuclear reactor".
Comment: [Did Time Start at the Big Bang](https://www.youtube.com/watch?v=K8gV05nS7mc)
Comment: In [Time Reborn](https://en.wikipedia.org/wiki/Time_Reborn), Lee *"Smolin hypothesizes that the very laws of physics are not fixed, but that they actually evolve over time."*
Comment: I'm voting to close this question for the "primarily opinion-based" reason. (It's not that these aren't questions that would make for a great discussion, it's that this site isn't a discussion forum)
Comment: regarding 2): would there be a point in asking: Which statements are considered physical laws depends on the status of current knowledge. Like the "law of constant day length" dropping when tidal friction was understood. Or think of energy conservation of photons in space which are redshifted while space itself is expanding according to modern cosmology. Kind of like a law of nature stepping down as one, as soon as it is superseded by a deeper model of nature. A bit handwavy, but that was what I thought.
Comment: In fairness, we also don't know that we aren't in the Matrix. Generally speaking, pretty much all scientific assertions are preceded by a silent "*According to my/our current understanding of the universe, $[\ldots]$*." It's not a bad thing to want to understand the assumptions upon which scientific assertions rest, but those are generally found in sources without the word "brief" in the title :)
Here is another answer: I think these are more the philosophical type of facts than scientific facts. They stem from the axioms for our reasoning and definitions of the words, and thus cannot be "verified" because, well, they must be believed to be true.
The Big Bang is considered the beginning of everything. When we make this statement, it is implied that we believe Big Bang's existence as a priori; and "everything" means literally everything, including time. So yeah, the Big Bang is the beginning of time, the first event. And before the first event, by definition of the word "first", there cannot be time before the "first" event, otherwise the Big Bang would not be the first event, right?
The physical laws, as a concept, do not change because they are defined as things that are unchangeable. Note that they are different from our perception of them, which is the useful thing that we use to make our lives better. Sure, constants like pi could change, or maybe the sun starts rising from the West, but that cannot be because the laws suddenly change. It can only be because we are dumb, because our perception of the laws is flawed. The laws are still, as they always have been, by definition, the way they are. It's us who need to update our understanding of them.
Personal opinion: it's all just definition and tautology. Fun to read for a while, important to know, but at some point it becomes useless quarrelling.
Comment for this answer: *Sure, constants like pi can change.* How could $\pi$ change?
Comment for this answer: In General Relativity the ratio of the measured circumference of a physical circle to its measured diameter is not in general exactly $\pi$ because physical space is not Euclidean. But $\pi$ is always 3.14159... . Mathematical constants like $\pi$ are defined by mathematics, and are completely independent of physics. For example, the infinite series $1-1/3+1/5-1/7+...$ sums to $\pi/4$ and this has *nothing* to do with any law of physics. It is true in every possible universe.
Comment for this answer: I'm just making outrageous examples of things that could happen that could potentially turn the world upside down and make us question physical laws and reality. But sure, I think we can find theories out there where pi=3.14 and the ratio of the circumference and diameter of a circle split, can we?
Here is another answer: Light from distant galaxies emitted eons ago has the same spectral lines as from elements on Earth, excepted redshifted due to their journey through the expanding universe. That seems to me like good evidence that atoms then and there were obeying the same laws of physics as atoms here and now.
I can't offer similar evidence for “no time before Big Bang”. That was Hawking’s opinion. Some cosmologists think that our Big Bang may have actually been a Little Bang in a larger multiverse. In that case, there would have been time before our Little Bang.
|
Title: Sequelize chain find with belongsTo
Tags: javascript;node.js;postgresql;sequelize.js
Question: I have three models, Discussion, User and Message, where:
```Message.belongsTo(models.Discussion, {as: 'discussion'})
Discussion.belongsToMany(models.User, {through: models.UserDiscussion})
```
I would like to get all messages concerning a specific user. From the user, I can easily get all the discussions with ```user.getDiscussions()```.
But then I don't know how to find the messages in a single request. Once I have the discussions array, I can call find
```models.Message.findAll({where: {discussionId: discussions[i].id}})
```
for every discussion in the array, but this is asynchronous and I don't know how to chain them, to return only the messages.
Is there no getter with a belongsTo relationship ?
Here is the accepted answer: Where you're searching for your user, you should be able to do something like:
```Discussion.findAll({
// where query
},
{
include: [Message]
})
.then(function(user) {
});
```
|
Title: Problem with AsyncTask: execute
Tags: android;android-asynctask;execute
Question: I tried to use ```AsyncTask``` object to connect my application to a remote server.
Here's my code:
```public class ConnectServer extends AsyncTask<Void, Void, Void> {
String IP = "";
InputStream is = null;
JSONObject json_data = null;
ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
ArrayList<String> donnees = new ArrayList<String>();
@Override
protected Void doInBackground( Void... params ) {
// TODO Auto-generated method stub
IP = "http://(816)671-1913/fichier.php";
nameValuePairs.add( new BasicNameValuePair( "nom", envois.nom ) );
nameValuePairs.add( new BasicNameValuePair( "prenom", envois.prenom ) );
nameValuePairs.add( new BasicNameValuePair( "nationalite", envois.nationalite ) );
nameValuePairs.add( new BasicNameValuePair( "passeport", envois.passeport ) );
try {
//commandes httpClient
HttpClient httpclient = new DefaultHttpClient();
HttpPost httppost = new HttpPost( IP );
httppost.setEntity( new UrlEncodedFormEntity( nameValuePairs ) );
HttpResponse response = httpclient.execute( httppost );
HttpEntity entity = response.getEntity();
is = entity.getContent();
Log.i( "tag", "depuis json" );
}
catch (Exception e) {
Log.i( "taghttppost", "" + e.toString() );
Toast.makeText( c, e.toString(), Toast.LENGTH_LONG ).show();
}
return null;
}
@Override
protected void onPreExecute() {
// TODO Auto-generated method stub
super.onPreExecute();
}
@Override
protected void onPostExecute( Void result ) {
// TODO Auto-generated method stub
super.onPostExecute( result );
Toast.makeText( getApplicationContext(), "fin d'envoi", Toast.LENGTH_SHORT ).show();
}
}
```
Then when I try to start that task in my ```onClick()``` method, like this:
```public void onClick( View v ) {
// TODO Auto-generated method stub
ConnectServer cs = new ConnectServer();
cs.execute( (Void) null );
}
```
It doesn't work. I debugged it with Eclipse; I am sure that the error comes from this line:
```cs.execute((Void)null);
```
I tried to replace it with
```cs.execute();
```
But the error persists. The debugger gives me something like:
```Thread [<10> AsyncTask #1] (Suspended (exception RuntimeException))
ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1086
ThreadPoolExecutor$Worker.run() line: 561
Thread.run() line: 1096
Thread [<12> AsyncTask #2] (Running)
```
I want to add that my code works perfectly when I use it without ```AsyncTask```.
Comment: Why do you have two strings for the IP? In the global variable you have it equal to nothing.
Comment: TheBluCat, I have just initialized the IP, but that is not the problem, because it worked with this when I removed AsyncTask
Here is another answer: As it turns out, sometimes just doing ```Project->Clean...``` in Eclipse will solve the problem ;)
Here is another answer: Try this in ConnectServer class:
```@Override
protected void onPostExecute(Void result) {
Toast.makeText(getApplicationContext(), "fin d'envoi", Toast.LENGTH_SHORT).show();
}
```
Try this in the Activity:
```@Override
public void onClick(View v) {
ConnectServer cs = new ConnectServer();
cs.execute();
}
```
Comment for this answer: I think I need to point out that the onClick method is called in an Activity, not in ConnectServer
Comment for this answer: OK! Today it works! I don't know why, but it works; I didn't change anything, but suddenly it works :) Thank you guys
|
Title: Parameter in web api method call is coming in null
Tags: c#;jquery;asp.net-mvc;asp.net-web-api
Question: I'm writing a Web API project, and one of the parameters (which is a JSON array) on my method is coming into the API as null. The jQuery I'm making the call with looks like this:
```<script>
$(document).ready(function () {
$('#btnSubmit').click(function () {
var jsRequestAction = {
appId: 'appName',
custId: 'custId',
oprId: 'oprId',
businessProcess: 'Requisition',
action: 'Approve',
actionKeys: [
'blah blah 1',
'blah blah 2',
'blah blah 3'
]
};
$.ajax({
type: "POST",
content: "json",
url: "http://localhost/api/appName/custId/oprId",
contentType: "application/json; charset=utf-8",
data: JSON.stringify({ requestAction: jsRequestAction })
});
});
});
</script>
```
My web api method looks like this:
```public IList<ResponseAction> ActionCounter(string appName, string custCode, string custUserName, RequestAction requestAction)
{
IList<ResponseAction> actionResponseList = new List<ResponseAction>();
var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["conn"].ConnectionString);
conn.Open();
try
{
foreach (string s in requestAction.actionKeys)
{
var command = new SqlCommand
{
CommandText = "Sql statement",
Connection = conn
};
command.ExecuteNonQuery();
var reply = new ResponseAction();
reply.responseActionKey = s;
reply.responseMessage = "Success";
actionResponseList.Add(reply);
}
return actionResponseList;
}
finally
{
conn.Close();
conn.Dispose();
}
}
```
RequestAction model:
```public class RequestAction
{
public string appId { get; set; }
public string custId { get; set; }
public string oprId { get; set; }
public string businessProcess { get; set; }
public string action { get; set; }
public string[] actionKeys { get; set; }
public string actionKey { get; set; }
}
```
When I debug, I step through the method and when I get to the foreach loop, I get a null object reference. Looking in my locals section, all my properties for requestAction are null. I have tried prefixing the object with the [FromBody] tag to no avail after reading a few related articles. Any help would be appreciated.
Here is the accepted answer: I found the answer to my question HERE. It was a matter of changing this:
```$.ajax({
type: "POST",
content: "json",
url: "http://localhost/api/appName/custId/oprId",
contentType: "application/json; charset=utf-8",
data: JSON.stringify({ requestAction: jsRequestAction })
});
```
to this:
```$.ajax({
type: "POST",
content: "json",
url: "http://localhost/api/appName/custId/oprId",
data: jsRequestAction
});
```
Otherwise, the data won't bind to my model in the controller and everything will be nulled out.
Here is another answer: You need to make sure the object server side matches the object you are creating client side so that the request can be serialized directly into the object.
So your action method will look like this:
```public IList<ResponseAction> ActionCounter(RequestAction requestAction)
{
// Do stuff
}
```
Where RequestAction should match the javascript object you are creating.
Comment for this answer: Took out all params except requestAction from the web api method and I'm still getting nulls. I should mention that I am moving this method over from a WCF rest service I built previously that worked, but I'm having the issue in web api.
|
Title: C++ Sieve of Atkin Returning Several Composites
Tags: c++;algorithm;primes;sieve-of-atkin
Question: I have made my own implementation of the Sieve of Atkin in C++; it generates primes fine until about 860,000,000. Around there and higher the program begins to return several composites, or so I think. I have a variable inside the program that counts the number of primes found, and at ~860,000,000 the count is more than it should be. I checked my count against a similar program for the Sieve of Eratosthenes, and several internet resources. I am new to programming so it is likely a stupid mistake.
Anyway, here it is:
```#include <iostream>
#include <math.h>
#include <time.h>
int main(int argc, const char * argv[])
{
long double limit;
unsigned long long int term,term2,x,y,multiple,count=2;
printf("Limit: ");
scanf("%Lf",&limit);
int root=sqrt(limit);
int *numbers=(int*)calloc(limit+1, sizeof(int));
clock_t time;
//Starts Stopwatch
time=clock();
for (x=1; x<root; x++) {
for (y=1; y<root; y++) {
term2=4*x*x+y*y;
if ((term2<=limit) && (term2%12==1 || term2%12==5)){
numbers[term2]=!numbers[term2];
}
term2=3*x*x+y*y;
if ((term2<=limit) && (term2%12==7)) {
numbers[term2]=!numbers[term2];
}
term2=3*x*x-y*y;
if ((term2<=limit) && (x>y) && (term2%12==11)) {
numbers[term2]=!numbers[term2];
}
}
}
//Print 2,3
printf("2 3 ");
//Sieves Non-Primes That Managed to Get Through
for (term=5; term<=root; term++) {
if (numbers[term]==true) {
multiple=1;
while (term*term*multiple<limit){
numbers[term*term*multiple]=false;
multiple++;
}
}
}
time=clock()-time;
for (term=5; term<limit; term++) {
if (numbers[term]==true) {
printf("%llu ",term);
count++;
}
}
printf("\nFound %llu Primes Between 1 & %Lf in %lu Nanoseconds\n",count,limit,time);
return 0;
}
```
Comment: Yes the maximum is limit, and if 4*x*x+y*y > limit then the value is useless because it is out of range of the limit.
Comment: I don't think it is either, but calloc (allocates memory) must use a pointer.
Comment: you could have `unsigned char *` too, no need for ints. But better, use `vector<bool>`. No manual `malloc` calls, and it is bit-packed.
Comment: what is maximum `limit`?
Comment: what if ` 4*x*x+y*y > limit ` ?
Comment: Probably not related to the question, but why do you use int* numbers for storing bool values?
Here is the accepted answer: From Wikipedia:
```The following is pseudocode for a straightforward version of the algorithm:
// arbitrary search limit
limit ← 1000000
// initialize the sieve
for i in [5, limit]: is_prime(i) ← false
// put in candidate primes:
// integers which have an odd number of
// representations by certain quadratic forms
for (x, y) in [1, √limit] × [1, √limit]:
n ← 4x²+y²
if (n ≤ limit) and (n mod 12 = 1 or n mod 12 = 5):
is_prime(n) ← ¬is_prime(n)
n ← 3x²+y²
if (n ≤ limit) and (n mod 12 = 7):
is_prime(n) ← ¬is_prime(n)
n ← 3x²-y²
if (x > y) and (n ≤ limit) and (n mod 12 = 11):
is_prime(n) ← ¬is_prime(n)
// eliminate composites by sieving
for n in [5, √limit]:
if is_prime(n):
// n is prime, omit multiples of its square; this is
// sufficient because composites which managed to get
// on the list cannot be square-free
for k in {n², 2n², 3n², ..., limit}:
is_prime(k) ← false
print 2, 3
for n in [5, limit]:
if is_prime(n): print n
```
```for (x, y) in [1, √limit] × [1, √limit]:``` is your problem.
You have used:
```for (x=1; x<root; x++)
for (y=1; y<root; y++)
```
Instead use :
```for (x=1; x<=root; x++)
for (y=1; y<=root; y++)
```
Hope this helps !
Comment for this answer: Thanks! Works perfectly!
|
Title: Passing filter to directive from controller angularjs
Tags: angularjs;angularjs-directive
Question: I have a directive called ```myCustomDirective``` and it takes a list of values and a filter function.
```myapp.directive('myCustomDirective', function ($rootScope) {
return {
replace: true,
restrict: 'E',
scope: {
values: "=",
customfilter: "&"
},
templateUrl: '/Scripts/template.html',
link: function (scope, elm, attr) {
}
}
});
```
And in my directive template ```template.html```, I have a simple list which will be filtered by the filter function that the directive got from the client code.
```<tbody>
<tr ng-repeat="item in values | filter:customFilter>
<td ng-repeat="key in displaykey">{{item.name}}</td>
</tr>
</tbody>
```
In my controller, the custom function is
```$scope.myCustomFilter = function(value)
{
return function(item)
{
if (item.name.indexOf(value) !== -1) return true; // i.e. item.name contains value
return false;
}
}
```
From HTML of client code, I want to pass the value and function by which the directive will filter the list.
```<div>
<my-custom-directive customfilter="myCustomFilter("a")"></my-custom-directive>
</div>
```
How can I get it right?
Comment: It's good to do the filtering in the directive itself, rather than in the HTML, which makes the code complex.
Comment: So, what is the problem? What's not working? Any error messages?
Comment: @Pop-A-Stash No error actually, but the list is not filtered as expected.
|
Title: Load QTableWidgets in a Scroll Area on Button Push
Tags: python;python-3.x;pyqt;pyqt5
Question: I have a method named 'test()' that loads 3 one-row tables into a scrollbar.
For some reason that I cannot figure out, however, while it works if I simply activate test() on load, it doesn't work if I comment it out and then try to activate it via the push of a button.
Here is the main module (with test())
```from PyQt5 import QtCore, QtWidgets
from design import Ui_MainWindow
class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self, *args, **kwargs):
QtWidgets.QMainWindow.__init__(self, *args, **kwargs)
self.setupUi(self)
#test(self)
def test(self):
from random import randint
x = randint(0, 99)
print(x)
height = 30
yPos = 0
for i in range(3):
rowVals = ['test%s' % str(x + i)]
qTbl = QtWidgets.QTableWidget(self.sawDoc)
qTbl.setObjectName("tbl%s" % (i))
qTbl.setGeometry(QtCore.QRect(0, yPos, 880, height))
qTbl.horizontalHeader().setVisible(False)
qTbl.verticalHeader().setVisible(False)
yPos += height
qTbl.setRowCount(1)
qTbl.setColumnCount(len(rowVals))
for r, cell in enumerate(rowVals):
item = QtWidgets.QTableWidgetItem()
item.setText(str(cell))
qTbl.setItem(0, r, item)
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
w = MainWindow()
w.show()
sys.exit(app.exec_())
```
And here is the design module (with the button)
```from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(896, 453)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.scrDoc = QtWidgets.QScrollArea(self.centralwidget)
self.scrDoc.setGeometry(QtCore.QRect(0, 0, 891, 391))
self.scrDoc.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
self.scrDoc.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
self.scrDoc.setWidgetResizable(False)
self.scrDoc.setObjectName("scrTest")
self.sawDoc = QtWidgets.QWidget()
self.sawDoc.setGeometry(QtCore.QRect(0, 0, 869, 300))
self.sawDoc.setObjectName("sawDoc")
self.scrDoc.setWidget(self.sawDoc)
self.btnTest = QtWidgets.QPushButton(self.centralwidget)
self.btnTest.setGeometry(QtCore.QRect(430, 400, 80, 15))
self.btnTest.setObjectName("btnTest")
MainWindow.setCentralWidget(self.centralwidget)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
self.btnTest.clicked.connect(self.test)
def test(self):
import main
main.test(self)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "昊珩のCAT工具"))
self.btnTest.setText(_translate("MainWindow", "Test"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
```
Simply nothing happens when I push the button (except for the print out working successfully).
Can someone tell what's wrong?
Here is the accepted answer: The parents show the children at the beginning, but afterwards it is the responsibility of the children to show themselves, the simple solution is to use ```show()```:
```qTbl = QtWidgets.QTableWidget(self.sawDoc)
qTbl.show()
```
But I see that you are implementing the solution in an inelegant way: make the connection in main.py, and do not modify the file generated by Qt Designer (delete the connection and the test method in design.py).
You must use a layout and add the QTableWidgets to it.
```class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self, *args, **kwargs):
QtWidgets.QMainWindow.__init__(self, *args, **kwargs)
self.setupUi(self)
self.btnTest.clicked.connect(self.test)
def test(self):
from random import randint
x = randint(0, 99)
lay = QtWidgets.QVBoxLayout(self.sawDoc)
for i in range(3):
rowVals = ['test%s' % str(x + i)]
qTbl = QtWidgets.QTableWidget()
qTbl.setObjectName("tbl%s" % (i))
qTbl.horizontalHeader().hide()
qTbl.verticalHeader().hide()
qTbl.setRowCount(1)
qTbl.setColumnCount(len(rowVals))
for r, cell in enumerate(rowVals):
item = QtWidgets.QTableWidgetItem()
item.setText(str(cell))
qTbl.setItem(0, r, item)
qTbl.resize(qTbl.sizeHint())
lay.addWidget(qTbl)
```
|
Title: Custom keyboard clear button
Tags: iphone;iphone-softkeyboard
Question: In my application I created a custom number pad. How can I delete the whole contents of the text field on a continuous tap of the clear button? Any idea?
Here is the accepted answer: This was a fun one.
Basically what I did was write a method to pull the last character out of the textField's text String.
I added another method to be fired on the Button's touchDown event, which first calls the method to erase the last character and then starts a timer to start repeating. Because the delay before repeating (at least on the native keyboard) is longer than the repeat delay, I use two timers. The first one's repeat option is set to NO; it calls a method that starts a second timer that repeats, calling the erase-last-character method repeatedly.
In addition to the touchDown event, we also register for the touchUpInside event. When fired, it calls a method that invalidates the current timer.
``` #import <UIKit/UIKit.h>
#define kBackSpaceRepeatDelay 0.1f
#define kBackSpacePauseLengthBeforeRepeting 0.2f
@interface clearontapAppDelegate : NSObject <UIApplicationDelegate> {
UIWindow *window;
NSTimer *repeatBackspaceTimer;
UITextField *textField;
}
@property (nonatomic, retain) IBOutlet UIWindow *window;
@property (nonatomic, retain) NSTimer *repeatBackspaceTimer;
@end
@implementation clearontapAppDelegate
@synthesize window;
@synthesize repeatBackspaceTimer;
- (void)applicationDidFinishLaunching:(UIApplication *)application {
textField = [[UITextField alloc] initWithFrame:CGRectMake(10, 40, 300, 30)];
textField.backgroundColor = [UIColor whiteColor];
textField.text = @"hello world........";
UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
button.frame = CGRectMake(10, 80, 300, 30);
button.backgroundColor = [UIColor redColor];
button.titleLabel.text = @"CLEAR";
[button addTarget:self action:@selector(touchDown:) forControlEvents:UIControlEventTouchDown];
[button addTarget:self action:@selector(touchUpInside:) forControlEvents:UIControlEventTouchUpInside];
// Override point for customization after application launch
[window addSubview:textField];
[window addSubview:button];
window.backgroundColor = [UIColor blueColor];
[window makeKeyAndVisible];
}
-(void) eraseLastLetter:(id)sender {
if (textField.text.length > 0) {
textField.text = [textField.text substringToIndex:textField.text.length - 1];
}
}
-(void) startRepeating:(id)sender {
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:kBackSpaceRepeatDelay
target:self selector:@selector(eraseLastLetter:)
userInfo:nil repeats:YES];
self.repeatBackspaceTimer = timer;
}
-(void) touchDown:(id)sender {
[self eraseLastLetter:self];
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:kBackSpacePauseLengthBeforeRepeting
target:self selector:@selector(startRepeating:)
userInfo:nil repeats:NO]; // the pause timer fires once; repetition is handled by startRepeating:
self.repeatBackspaceTimer = timer;
}
-(void) touchUpInside:(id)sender {
[self.repeatBackspaceTimer invalidate];
}
- (void)dealloc {
[window release];
[super dealloc];
}
@end
```
Comment for this answer: Ok, now I understand. I wrote a sample app to do that and replaced my post with it above.
Comment for this answer: The above code will clear all texts at a single tap. I need exactly,what iPhone keyboards clear button does. That is at a single tap delete only one character, and on continous click clear all contents.
Comment for this answer: very nice!!! Thank you very much... Now i got what i exaclty want. Thanks again!!!
Here is another answer: On touchUpInside, delete a single character.
On touchDownRepeat, delete the whole word or field (I think the iPhone's delete deletes a word at a time first)
Comment for this answer: touchDownRepeat will delete a character when the button is clicked two times. But I am not lifting my finger from the button; actually it is a single tap.
|
Title: DEV C++: Create an array attribute inside a class that holds objects of another class
Tags: c++
Question: In DEV C++ I need to create 2 classes, where one of these classes has an attribute that holds objects of the other class. I have no idea how to declare this array attribute, or how to fill it with the objects of the other class afterwards. If someone could write something to guide me, or explain how arrays that hold objects of another class are created, I would be grateful from the bottom of my heart :c. Greetings, and even if you can't help me, thanks for trying!
Comment: Please clarify your specific problem or provide additional details to highlight exactly what you need. As it is written, it is hard to tell exactly what you are asking.
Comment: Take a look at [ask] so your question is better received. Also, take the [tour] to better understand how we work and get your first [badge](https://es.stackoverflow.com/help/badges)! What have you researched about passing data between classes?
Comment: The `PARENT`... [no translation needed]
Here is another answer: From what I understood of your question, you want to use attributes of other classes, right? You can use inheritance, which gives all the attributes and methods of a class to a child class.
If that is not what you are looking for, then maybe what you need are friend classes, which allow an independent class (or function) to access another class. Clarify your question a bit more so we can help you better.
Inheritance:
```class hija: public padre{
};
```
Friend classes:
```// this declares that the given class or function will be a friend of this class
friend void amigo_suma();
```
The function amigo_suma() can now access the attributes of the class it is linked to.
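For the array attribute the question actually asks about, a minimal sketch might look like this (class and member names are illustrative; std::vector is used as the array):
```#include <iostream>
#include <string>
#include <vector>

class Estudiante {
public:
    explicit Estudiante(std::string nombre) : nombre(std::move(nombre)) {}
    std::string nombre;
};

class Curso {
public:
    // The "array attribute" that receives objects of the other class
    std::vector<Estudiante> estudiantes;

    void agregar(const Estudiante& e) { estudiantes.push_back(e); }
};

int main() {
    Curso curso;
    curso.agregar(Estudiante("Ana"));
    curso.agregar(Estudiante("Luis"));
    for (const auto& e : curso.estudiantes)
        std::cout << e.nombre << "\n";
    return 0;
}
```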
|
Title: I blocked out the character in parts. Then I joined the torso and legs, shaded it smooth, and added a subdivision mod. What's this crease?
Tags: normals;subdivision-surface
Question: I want to get rid of it, but it's frustrating me :<
Comment: Hello, maybe you have inverted normals? In Edit mode, select all and press Shift N. If it doesn't work you may have inner faces or overlapping vertices...
Comment: @moonboots It's fixed now, I inverted the normals :> thank you
Comment: Does this answer your question? [Subdivision Surface seam artifact?](https://blender.stackexchange.com/questions/231650/subdivision-surface-seam-artifact)
Here is the accepted answer: You have inverted normals. In Edit mode, select all and press Shift N to recalculate the normals.
|