_id | partition | text | language | title
---|---|---|---|---|
d8601 | train | This is likely because you are using your AppIcon image, which is a fully opaque image, i.e. no part of that image is transparent or has alpha=0.
To get the desired effect, you have to use a different image which is partly (or mostly) transparent. The native in-call UI will only use the alpha channel of the image you provide, so it ignores colors. I suggest following the example in the Speakerbox sample app and providing a secondary PNG image in your image asset catalog with transparency.
A: You need to use the code below to set the app name and app icon for incoming VoIP calls:
let localizedName = NSLocalizedString("App-Name", comment: "Name of application")
let providerConfiguration = CXProviderConfiguration(localizedName: localizedName)
providerConfiguration.iconTemplateImageData = UIImage.init(named: "appicon-Name")?.pngData()
Note: the icon image you provide must be (partly) transparent, since only its alpha channel is used. | unknown | |
d8602 | train | The problem with your first XPath is probably that the i element is not nested in any span elements.
Maybe it is not necessary to specify the full path to the element, because its class document-icon-eye is a more or less sufficient identifier in your concrete scenario. You could use something like this:
//div[@id='DocSelected-1']//i[contains(@class, 'document-icon-eye')]
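In code, that XPath could be used like so (just a sketch, assuming Selenium WebDriver in Java; adapt it to whatever client you actually use):
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;

// driver is your already-created WebDriver instance
WebElement eyeIcon = driver.findElement(
        By.xpath("//div[@id='DocSelected-1']//i[contains(@class, 'document-icon-eye')]"));
eyeIcon.click();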
I suggest you use the built-in XPath search tool in the browser developer tools. Pasting the candidate XPath into the search field (Ctrl + F in the Elements view) will quickly show you whether the XPath will match properly. This could save you a lot of time compared to successive trial-and-error compilations and executions of your code.
After the edit, the correct versions of your XPaths would be:
//div/i
//div[@id='DocSelected-1']//i[@class='document-icon-eye dl-padding-right-10 pull-right'] | unknown | |
d8603 | train | There is actually a better way to solve this problem. I ran into the same issue and a type cast inside every derived subscriber class was not an option.
Just update the abstract UseCase class with an generic type parameter.
abstract class UseCase<T>(private val threadExecutor: IThreadExecutor,
private val postExecutionThread: IPostExecutionThread) {
private var subscription = Subscriptions.empty()
fun execute(UseCaseSubscriber: rx.Subscriber<T>) {
subscription = buildUseCaseObservable()
.subscribeOn(Schedulers.from(threadExecutor))
.observeOn(postExecutionThread.getScheduler())
.subscribe(UseCaseSubscriber)
}
protected abstract fun buildUseCaseObservable(): Observable<T>
fun unsubscribe() {
if (!subscription.isUnsubscribed) {
subscription.unsubscribe()
}
}
}
When you declare your derived UseCase classes, use your concrete type for the generic parameter when calling the super class.
class ConcreteUseCase(val threadExecutor: IThreadExecutor,
val postExecutionThread: IPostExecutionThread)
: UseCase<ConcreteType>(threadExecutor, postExecutionThread)
Doing so, you can use typed Subscribers in your execute call.
getNewsListInteractor.execute(NewsListSubscriber())
...
private inner class NewsListSubscriber : rx.Subscriber<List<NewsModel>>() {
    override fun onCompleted() { /* TODO */ }
    override fun onError(e: Throwable) { /* TODO */ }
    override fun onNext(t: List<NewsModel>) { /* TODO */ }
}
A: I found a solution that is pretty simple actually: my NewsListSubscriber class has to extend rx.Subscriber<Any> instead of rx.Subscriber<MyWantedClass>. It means I need to cast the received objects to the wanted type.
private inner class NewsListSubscriber : DefaultSubscriber<Any>() {
override fun onCompleted() {}
override fun onError(e: Throwable) {}
override fun onNext(t: Any?) {
val newsList = t as List<News>
...
}
}
In Java the cast is done in the background, but in Kotlin we need to do it ourselves.
I also removed all "in" or "out" keywords in my UseCase class. | unknown | |
d8604 | train | Very simple: use toggle() instead of show()/hide(). toggle() makes the element visible if it is hidden and hides it if it is visible.
<script type='text/javascript'>
function toggleReport(element_ID){
$("#"+element_ID).toggle();
}
</script>
If you want to hard-code the element ID, then use the following script:
<script type='text/javascript'>
function toggleReport(){
$("#table_1").toggle();
}
</script>
Cheers, and don't forget to vote up my answer :)
A: If you're passing an element reference in, use that as your selector:
function toggleReport(table){
$(table).toggle();
}
Note I'm using .toggle(), which will do exactly what you're attempting to do manually. If you wanted to log the new state, you can do so in the callback:
function toggleReport( table ) {
$( table ).toggle('fast', function(){
console.log( "Element is visible? " + $(this).is(":visible") );
});
}
Note, if you're passing in the ID of the element, your selector will need to be modified:
$( "#" + table ).toggle();
A: There's a prebuilt Jquery function for this.
A: You can use toggle function to achieve this....
$(selector).toggle();
A: demo http://jsfiddle.net/uW3cN/2/
demo with parameter passed http://jsfiddle.net/uW3cN/3/
Good read API toggle: http://api.jquery.com/toggle/
code
function toggleReport(){
//removed the argument
$('#table_1').toggle();
}
another sample; the rest of the HTML is here: http://jsfiddle.net/uW3cN/3/
function toggleReport(tableid){
//removed the argument
$('#'+tableid).toggle();
}
A: The mistake you made in your original function is that you did not actually use the parameter you passed into your function. You selected "#table", which simply selects an element on the page with the id "table". It didn't reference your variable at all.
If your intention was to pass an ID, the selector should be written jQuery("#" + table). If table is instead a reference to a DOM element, you would write jQuery(table) (no quotes and no pound sign). | unknown | |
d8605 | train | No. Objects have no knowledge of what variables and properties (there can be multiple) they are assigned to.
A: There is absolutely no way of doing this.
When you do
LHS = RHS
RHS has no clue of what its result is going to be assigned to. | unknown | |
d8606 | train | The control needs to be instantiated. If you placed it on a dialog template, then opening the dialog will create the control. The other approach is to call the CreateControl method, which you can find in the .h file of the control's wrapper class. | unknown | |
d8607 | train | There are various ways to make two windows talk to each other (through a server, with cookies, using the FileSystem API, or Local Storage).
Local Storage is by far the easiest way to talk between two windows that come from the same domain, but it is not supported in older browsers. Since you need to contact the server anyway to find out when someone has registered, I recommend using the server (Ajax/web sockets).
Here's a somewhat simple AJAX solution which you could use in your random page:
(function(){
    var formOpened = false;
    var registered = false;
    // Change to whatever interaction is needed to open the form
    $('#registerbutton').on('click', function(){
        if (!formOpened){
            formOpened = true;
            // Will try to poll the server every 3 seconds
            (function pollServer(){
                $.ajax({
                    url: "/ajax/pollregistered",
                    type: "GET",
                    success: function(data) {
                        if (data.registered){
                            registered = true;
                            // do other stuff here that is needed after registration
                        }
                    },
                    dataType: "json",
                    // schedule the next poll once this request has completed
                    complete: function() {
                        setTimeout(function() { if (!registered) pollServer(); }, 3000);
                    },
                    // will time out if the request takes longer than 2 seconds
                    timeout: 2000
                });
            })();
            // opens the new tab
            var win = window.open('/registrationform', '_blank');
            win.focus();
        }
    });
})();
You can then use the session key of the visitor in the '/ajax/pollregistered' action to write your backend code to check if and when this visitor registered as user to your site.
If you don't mind using the local storage, you could poll the storage instead of the server and have the registration page write to local storage upon registration. This is less stressing for your server, but might not work for users with ancient browsers.
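For illustration, here is a minimal sketch of that Local Storage variant (the key name is made up, not from the question):
// registration page: run this once the user has successfully registered
localStorage.setItem('registrationDone', 'true');

// original page: the 'storage' event fires here when the other tab writes the flag
window.addEventListener('storage', function (e) {
    if (e.key === 'registrationDone' && e.newValue === 'true') {
        // do whatever is needed after registration
    }
});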
Also worth noting that this method of opening a new tab is not completely reliable. Technically there is no sure-shot way of opening a new tab with JavaScript. It depends on the browser and the preferences of the user. Opening a new tab might get blocked for users who are using anti-popup plugins.
That being said, I strongly recommend revising your design and try finding a way to work with just one window instead. It's probably better for both yourself and your users. | unknown | |
d8608 | train | You are keeping a copy of your persisted waitinglist variable around between page loads. When your new page is rendered for the second time, since the waiting list has already been persisted, it is doing all the magical default Rails behaviours, which include updating labels for the submit button (create vs update), and the form's method (post vs patch).
You will want to create a new waitinglist if you are going to re-render the new page:
def create
  @waitinglist = Waitinglist.new(waitinglist_params)
  if @waitinglist.save
    @waitinglist = Waitinglist.new # fresh object so the form renders in "create" mode again
    flash[:notice] = "You have been added to the waiting list"
    render :new
  else
    render :new
  end
end
A: I think what you should do is: if the waiting list is saved, redirect to new instead of rendering new. When you redirect to an action, the action is actually called, so a new object will be created. When you render, only the view is rendered and it will still have your persisted object; that's why it is trying to update. | unknown | |
d8609 | train | https://ionicframework.com/docs/v2/api/platform/Platform/
width()
Gets the width of the platform’s viewport using window.innerWidth. Using this method is preferred since the dimension is a cached value, which reduces the chance of multiple and expensive DOM reads.
height()
Gets the height of the platform’s viewport using window.innerHeight. Using this method is preferred since the dimension is a cached value, which reduces the chance of multiple and expensive DOM reads.
* Please try using window.innerHeight and window.innerWidth to get the height and width of the device first.
* Try:
// Add readySource to check if platform ready was used. The resolved value is the readySource, which states which platform ready was used.
// For example, when Cordova is ready, the resolved ready source is cordova. The default ready source value will be dom.
import { Component } from '@angular/core';
import { Platform } from 'ionic-angular';
@Component({...})
export class MyApp {
constructor(platform: Platform) {
platform.ready().then((readySource) => {
console.log('Platform ready from', readySource);
console.log('Width: ' + platform.width());
console.log('Height: ' + platform.height());
});
}
* Remove Crosswalk in Ionic 2 to check whether the system webview returns correct results; if it does, it is a Crosswalk issue rather than an Ionic issue.
Relevant topic: https://forum.ionicframework.com/t/how-to-get-device-width-and-height/28372 | unknown | |
d8610 | train | Every LinearRing must have at least 3 points, and its first point and last point must be the same.
In this example that is true, but maybe your file contains a wrong one. | unknown | |
d8611 | train | try this:
var width=$('.main').width();
var height=$('.main').height();
$('a').css(
{
"position":"absolute",
"bottom":"50%",
"margin-top":(height/2),
"left":(width/2)-50
});
DEMO
UPDATE
In CSS
.main a{
bottom:50%;
margin-top:-150px;
position:absolute;
left:75px;
}
DEMO
A: You can set all in css like this :
a{
position : absolute ;
height : 10px;
width : 100px;
top : 50%;
margin-top : -5px;
left : 50%;
margin-left : -50px;}
Demo
A: try this to make it horizontally center
a{
position: absolute;
text-align: center;
width: 100%;
}
OR this to make it horizontally and vertically both center
a {
height: 100%;
position: absolute;
text-align: center;
top: 50%;
width: 100%;
}
A: .main img { top:50%; left:50%; margin: -11px 0 0 -50px; position: absolute;}
Is this what you want?
A: If your height is fixed, you could use something like this maybe?
CSS:
#div1 {
width:100%;
height:100px;
text-align:center;
}
#div1 img {
line-height:100px;
}
HTML:
<div id="div1">
<img src="yourimage.jpg" />
</div> | unknown | |
d8612 | train | Maybe a better question would be "How many different tools could you use for the job?"
I'd probably go with awk as the easiest tool that does the job reasonably simply:
awk -F, 'NR == 1 { print; OFS="," } NR > 1 { sub(/^ +/, "&Prefix-", $3); print }'
The sub operation adds Prefix- after the spaces at the start of column 3. The code does not attempt to adjust the content of line 1 (the heading); if you want spaces added after $3, then I suppose this does the job (because of the placement of commas, you prefix the extra spaces to column 4 of line 1):
awk -F, 'NR == 1 { OFS=","; $4 = " " $4; print }
NR > 1 { sub(/^ +/, "&Prefix-", $3); print }'
Do you know how to do the same thing with sed?
Yes, like this:
sed -e ' 1s/^\(\([^,]*,[[:space:]]*\)\{3\}\)/\1 /' \
-e '2,$s/^\(\([^,]*,[[:space:]]*\)\{2\}\)/\1Prefix-/' "$@"
The first expression deals with the first line; it puts as many spaces as there are in the prefix (here that's "Prefix-" so it's 7 spaces) after the third column. The second expression deals with the remaining lines; it adds the prefix before the third column.
To deal with column N instead of column 3, change the 3 to N and the 2 inside \{2\} to N-1.
I rechecked the second Awk script; it produces the correct output for me on the sample data from the question. So, within its limitations, does the first Awk script. Make sure you're using something other than the C shell (it gets upset by multi-line quoted strings), and that you were careful with your copying.
Example output
$ cat data
column-1, column-2, column-3, column-4, column-5
Row-1-c1, Row-1-c2, Row-1-c3, Row-1-c4, Row-1-c5
Row-2-c1, Row-2-c2, Row-2-c3, Row-2-c4, Row-2-c5
Row-3-c1, Row-3-c2, Row-3-c3, Row-3-c4, Row-3-c5
Row-4-c1, Row-4-c2, Row-4-c3, Row-4-c4, Row-4-c5
Row-5-c1, Row-5-c2, Row-5-c3, Row-5-c4, Row-5-c5
$ bash manglesed.sh data
column-1, column-2, column-3, column-4, column-5
Row-1-c1, Row-1-c2, Prefix-Row-1-c3, Row-1-c4, Row-1-c5
Row-2-c1, Row-2-c2, Prefix-Row-2-c3, Row-2-c4, Row-2-c5
Row-3-c1, Row-3-c2, Prefix-Row-3-c3, Row-3-c4, Row-3-c5
Row-4-c1, Row-4-c2, Prefix-Row-4-c3, Row-4-c4, Row-4-c5
Row-5-c1, Row-5-c2, Prefix-Row-5-c3, Row-5-c4, Row-5-c5
$ bash mangleawk.sh data
column-1, column-2, column-3, column-4, column-5
Row-1-c1, Row-1-c2, Prefix-Row-1-c3, Row-1-c4, Row-1-c5
Row-2-c1, Row-2-c2, Prefix-Row-2-c3, Row-2-c4, Row-2-c5
Row-3-c1, Row-3-c2, Prefix-Row-3-c3, Row-3-c4, Row-3-c5
Row-4-c1, Row-4-c2, Prefix-Row-4-c3, Row-4-c4, Row-4-c5
Row-5-c1, Row-5-c2, Prefix-Row-5-c3, Row-5-c4, Row-5-c5
$ cat manglesed.sh
sed -e ' 1s/^\(\([^,]*,[[:space:]]*\)\{3\}\)/\1 /' \
-e '2,$s/^\(\([^,]*,[[:space:]]*\)\{2\}\)/\1Prefix-/' "$@"
$ cat mangleawk.sh
awk -F, 'NR == 1 { OFS=","; $4 = " " $4; print }
NR > 1 { sub(/^ +/, "&Prefix-", $3); print }' "$@"
$ awk -F, 'NR == 1 { print; OFS="," } NR > 1 { sub(/^ +/, "&Prefix-", $3); print }' data
column-1, column-2, column-3, column-4, column-5
Row-1-c1, Row-1-c2, Prefix-Row-1-c3, Row-1-c4, Row-1-c5
Row-2-c1, Row-2-c2, Prefix-Row-2-c3, Row-2-c4, Row-2-c5
Row-3-c1, Row-3-c2, Prefix-Row-3-c3, Row-3-c4, Row-3-c5
Row-4-c1, Row-4-c2, Prefix-Row-4-c3, Row-4-c4, Row-4-c5
Row-5-c1, Row-5-c2, Prefix-Row-5-c3, Row-5-c4, Row-5-c5
$ | unknown | |
d8613 | train | All you need is a reference to the other button, then you can do other_button.text = 'whatever'.
The way to do this depends on how you've constructed the program. For instance, if you constructed the program in kv language, you can give your buttons ids with id: some_id and refer to them in the callback with stuff like on_press: some_id.do_something().
In pure python, you could keep references to the button in the parent class when you create them (e.g. self.button = Button()) so that the callback can reference self.button to change it. Obviously that's a trivial example, but the general idea lets you accomplish anything you want.
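For instance, here is a minimal self-contained sketch of that pure-Python approach (the widget texts and names are invented):
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button

class Root(BoxLayout):
    def __init__(self, **kwargs):
        super(Root, self).__init__(**kwargs)
        self.other_button = Button(text='text before')   # keep a reference on self
        trigger = Button(text='Press me')
        trigger.bind(on_press=self.change_other)         # callback can reach self.other_button
        self.add_widget(trigger)
        self.add_widget(self.other_button)

    def change_other(self, instance):
        self.other_button.text = 'whatever'

class DemoApp(App):
    def build(self):
        return Root()

if __name__ == '__main__':
    DemoApp().run()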
A: Probably not the official way, but try out the following code. It will change the text property of the buttons...
Ezs.kv file:
#:kivy 1.8.0
<Ezs>:
    BoxLayout:
        orientation: 'vertical'
        padding: 0
        spacing: 6
        #choose
        Button:
            id: btn_1
            text: 'text before'
            on_press: btn_2.text = 'Whatever'
            on_release: self.text = 'Who-Hoo'
        #choose
        Button:
            id: btn_2
            text: 'Press this'
            on_release: self.text = 'HEEYY'
            on_press: btn_1.text = 'text after'
.py file:
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout

class Ezs(BoxLayout):
    pass

class EzsApp(App):
    def build(self):
        return Ezs()

if __name__ == '__main__':
    EzsApp().run() | unknown | |
d8614 | train | But 2) is the preferred way according to Apple's HIG:
Even though your application does not run in the background when the user switches to
another application, you are encouraged to make it appear as if that is the case. When your
application quits, you should save out information about your application’s current state in
addition to any unsaved data. At launch time, you should look for this state information and
use it to restore your application to the state it was in when it was last used. Doing so
provides a more consistent user experience by putting the user right back where they were
when they last used your application. Saving the user’s place in this way also saves time by
potentially eliminating the need to navigate back through multiple screens’ worth of
information each time an application is launched.
As for the technical implementation, it's exactly the same: push your view controllers onto the navigation controller. The only thing I'd do differently is not animate the 'drilling down', for example:
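A rough sketch of that idea in Swift (savedScreens and DetailViewController are hypothetical names standing in for whatever state your app persisted; this is not code from the question):
import UIKit

func restoreStack(on navigationController: UINavigationController, savedScreens: [String]) {
    for screen in savedScreens {
        let controller = DetailViewController(screenName: screen)      // hypothetical screen type
        navigationController.pushViewController(controller, animated: false)  // no animation at launch
    }
}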
A: When your app starts up, it has an initial 'look', screen, view, whatever.
When it starts back up and has a previous context, add a link or button naming the previous context that allows returning there. | unknown | |
d8615 | train | Check the git-pull documentation:
--ff
--no-ff
--ff-only
Specifies how a merge is handled when the merged-in history is already a descendant of the current history. --ff is the default
unless merging an annotated (and possibly signed) tag that is not
stored in its natural place in the refs/tags/ hierarchy, in which case
--no-ff is assumed.
--ff is used as default, which resolves the merge as a fast-forward if possible, and creates a merge commit when not possible. | unknown | |
d8616 | train | Attach your state param to the auth request itself, don’t put it in the redirect_uri param. Then the state param is automatically sent back to the redirect uri.
https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id={client_id}&scope=user.read&response_type=code&redirect_uri=http%3A%2F%2Flocalhost%3A8000%2F1%2Ffrontend%2Flogin&state=xyz | unknown | |
d8617 | train | I think there are two ways of doing this:
* Use your Raspberry Pi as a web server too: install Nginx or Apache, for example (they are web servers), and give them your React app.
* Use an external hosting provider, like OVH for example, and give it your React app too.
I don't know if you know how to build a React website, but there are plenty of tutorials on the web, like this one.
The goal here is to create an API relation between your NodeJS application and your website. The NodeJS server has to listen on a port (8080 for example) and on specific URLs which correspond to commands (/api/reboot will reboot the app, for example). In your website, you just have to call those URLs when a button is pushed (a 'Reboot' button, for example, will send a POST request to http://raspberrypi:8080/api/reboot).
Basically, link every command you want to execute with your NodeJS application to a URL, and link that URL in your website to an action. For example:
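A minimal sketch of what such a mapping could look like on the NodeJS side (assuming the Express package; the route name and command are just examples):
const express = require('express');
const { exec } = require('child_process');
const app = express();

// POST http://raspberrypi:8080/api/reboot -> runs the command on the Pi
app.post('/api/reboot', (req, res) => {
  exec('sudo reboot', (error) => {
    if (error) {
      return res.status(500).send(error.message);
    }
    res.send('rebooting');
  });
});

app.listen(8080);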
If you want to secure the transmission (so nobody else can reboot your app), just include some password check and HTTPS :)
See ya !
A: Here is the link: MyExample
I also recommend adding the child_process module to execute commands like this:
var exec = require('child_process').exec;

execute('sudo reboot'); // use this when receiving the socket message

function execute(command) {
    var cmd = exec(command, function(error, stdout, stderr) {
        console.log("error: ", error);
    });
}
With this you can make a client side with a "terminal" (text holder), and on a button click the client would send the command in the text holder to the RPi. | unknown | |
d8618 | train | print(2, 3.0f);
Could be both print(int, float) and print(float, double), since implicit type conversions are done in the background. An int can be converted to a float. javac (the compiler) cannot know for sure which one you meant.
If you want to choose for yourself, you can add casts:
print((float) 2, (float) 3.0f);
(Note that the second cast (float => float) isn't necessary.) | unknown | |
d8619 | train | The values in your writer.writerow() will not be defined if an element is missing. You could just define some default values to avoid this.
Try adding the following after the try statement:
noteat, notetext, responsibilities, certaintytag, certaintyat, certaintytext = [''] * 6
You could of course have 'NA' if preferred. | unknown | |
d8620 | train | typedef unsigned char byte;
This is unreadable. Consider including <stdint.h> and using uint8_t.
My problem is that, whenever I call this function ma_init(); I get a segmentation fault on footer->status = FREE;
Please learn to compile with all warnings & debug info (e.g. gcc -Wall -Wextra -g with GCC...) then use the debugger (gdb) ...
You are initializing header to the address of mem_pool.
Then you do some questionable (that is wrong) pointer arithmetic
mem_chunk_header * footer = header + header->size + sizeof(mem_chunk_header);
// the + above are likely to be wrong
You are adding to a pointer (header), so the + is in terms of the pointed-to element size (not bytes), that is, in units of sizeof(mem_chunk_header), which certainly is at least 2 and probably more (on my Linux/x86-64 desktop it is 8). Your footer is far away.
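A possible correction, as a sketch (it assumes size counts the payload bytes that follow the header, which is how the code above appears to use it):
/* do the arithmetic in bytes, then cast back to the header type */
mem_chunk_header *footer =
    (mem_chunk_header *)((unsigned char *)header
                         + sizeof(mem_chunk_header)  /* skip the header itself */
                         + header->size);            /* then the payload, in bytes */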
With a debugger, you would have noticed that (by querying the values of header, footer, mem_pool). Consider also using valgrind
BTW, if you are coding a memory allocator à la malloc you'll better base it on operating system specific primitives (generally system calls) modifying your virtual address space. On Linux you would use mmap(2) and friends.
points me into the right direction here cause after some googling for some hours
You need to spend weeks reading some good C programming book. Hours of work are not enough in your case. | unknown | |
d8621 | train | You need a GtkTextView which you can set to be not editable. I suggest you look at this excellent GTK tutorial which explains what widgets are available in GTK and how to put them together, accompanied by lots of example code. | unknown | |
d8622 | train | A lot of the code within Delphi depends on the width of scrollbars being the fixed system setting, so you can't alter the width without breaking the control. (Not without rewriting the TControlScrollBar and related controls in the VCL.)
You could, of course, hide the default scrollbars of the control and add your own TScrollBar components next to it. The standard TScrollBar class is a WinControl itself, where the scrollbar takes up the whole width and height of the control. The TControlScrollBar class is linked to another WinControl to manage the default scrollbars that are assigned to windowed controls. While the raw API could make it possible to use a more flexible width, you'd always have the problem that the VCL will just assume the default system-defined width for these controls.
This also shows the biggest difference between both scrollbar types: TScrollBar has it's own Windows handle while TControlScrollBar borrows it from the related control.
A: You can try something like this:
your_frame.HorzScrollBar.Size := 50;
your_frame.HorzScrollBar.ButtonSize := your_frame.HorzScrollBar.Size;
A: procedure TForm1.FormCreate(Sender: TObject);
var NCMet: TNonClientMetrics;
begin
FillChar(NCMet, SizeOf(NCMet), 0);
NCMet.cbSize:=SizeOf(NCMet);
// get the current metrics
SystemParametersInfo(SPI_GETNONCLIENTMETRICS, SizeOf(NCMet), @NCMet, 0);
// set the new metrics
NCMet.iScrollWidth:=50;
SystemParametersInfo(SPI_SETNONCLIENTMETRICS, SizeOf(NCMet), @NCMet, SPIF_SENDCHANGE);
end; | unknown | |
d8623 | train | You can do this any way you want. There is no "one right way".
On the extreme end, you can have users submit a blood sample when they request an account. You can then check new blood samples against your database. This could result in people submitting family member's blood samples. If that's a concern, you may wish to have them sign a contract, notarized most likely, stating that they have submitted their own blood sample.
You could also have them personally register at your office. You could, for example, collect fingerprints and compare them to your database.
IP addresses are not very useful for this purpose. Some people have access to dozens of IP addresses and in some cases many people share a single IP address.
You could only give a vote to users who have made a large number of intelligent posts to a forum. That would at least cut down on the ease with which people can create voting accounts.
A: I am not sure if it can be implemented in your application, but try logging unique MAC addresses instead of unique IP addresses. MAC addresses are universally unique for each network controller, and a PC usually has just one. | unknown | |
d8624 | train | This is how I would do it. Have null-able parameters
ALTER PROCEDURE [dbo].[spUpdateProduct]
@ProductID int,
@Brand nvarchar(30) = null,
@ModelNo nvarchar(9) = null, ..... (to all the parameters except @ProductID)
AS
BEGIN
SET NOCOUNT ON
UPDATE tblProduct
SET
Brand = isNull(@Brand, Brand),
ModelNo = isNull(@ModelNo, ModelNo),
[Description] = isNull(@Description, Description),...
WHERE ProductID = @ProductID
END
Basically, you only update the fields whose parameters are not null; otherwise you keep the old values.
A: You have two problems - (a) the parameter to the stored procedure must be provided (the error tells you this) and (b) what to do when the parameter is not provided. For the first problem, check the 'pass null value' option or use a SQL query to execute the stored procedure, like so:
exec spUpdateProduct 1, null, null, null, 140.99, null, null
For problem (b), use coalesce to update based on value being passed:
ALTER PROCEDURE [dbo].[spUpdateProduct]
@ProductID int, @Brand nvarchar(30), @ModelNo nvarchar(9), @Description
nvarchar(50), @CostPrice decimal(6,2), @Stock int, @SalePrice decimal(6,2)
AS
BEGIN
SET NOCOUNT ON
UPDATE t
SET
Brand = coalesce(@Brand, t.Brand),
ModelNo = coalesce(@ModelNo, t.ModelNo),
[Description] = coalesce(@Description, t.Description),
CostPrice = coalesce(@CostPrice, t.CostPrice),
Stock = coalesce(@Stock, t.Stock),
SalePrice = coalesce(@SalePrice, t.SalePrice)
FROM tblProduct as t
WHERE ProductID = @ProductID
END
A: After spending some time pondering this, I came up with a solution that doesn't use dynamic SQL and also solves issues that the coalesce and isNull approaches don't solve. Note: minimal testing... it looks like it does add a little overhead... but having this ability might allow flexibility in your client-side code.
Table Schema:
FirstName NVARCHAR(200)
LastName NVARCHAR(200)
BirthDate DATE NULL
Activated BIT
Xml Param:
DECLARE @param XML =
N'<FirstName>Foo</FirstName>
<LastName>Bar</LastName>
<BirthDate>1999-1-1</BirthDate>
<Activated>1</Activated>'
UPDATE [dbo].[UserMaster] SET
[FirstName] = case when @param.exist('/FirstName') = 1 then @param.value('/FirstName[1]','nvarchar(200)') else [FirstName] end,
[LastName] = case when @param.exist('/LastName') = 1 then @param.value('/LastName[1]','nvarchar(200)') else [LastName] end,
[BirthDate] = case when @param.exist('/BirthDate') = 1 then @param.value('/BirthDate[1]','date') else [BirthDate]end,
[Activated] = case when @param.exist('/Activated') = 1 then @param.value('/Activated[1]','bit') else [Activated] end
WHERE [UserMasterKey] = @param.value('/UserMasterKey[1]','bigint')
This will allow you to pass in an XML parameter with any fields you want, and update those fields. Note that setting fields equal to themselves in this context, for example [FirstName] = [FirstName], is perfectly fine; the engine will optimize this away.
Now, this is a more complex solution than @Anand's and @r.net's answers; however, it does solve the 'setting nullables' problem that some commenters pointed out.
I liked how elegant their solutions were, but I wanted to be able to set NULL on nullable columns while still retaining the ability to use the default field value if I don't pass in a parameter for that field. Using null checks as the basis for defaults in this case fills one hole and digs another, so to speak, because it doesn't allow one to explicitly update a field to null.
Passing an xml parameter allows you to do that, all without taking a dynamic sql approach.
I would also look into using Table-valued parameters, as I believe there could be a similar solution using those. | unknown | |
d8625 | train | This is actually part of the JAX-WS spec. You can do
@Resource
WebServiceContext ctx;
....
ctx.getMessageContext().get(MessageContext.SERVLET_REQUEST)
to get the ServletRequest object from which you can do anything with the session or whatever.
Note: by default, JAX-WS clients don't maintain the session cookie. You have to set them to maintain the session:
((BindingProvider)proxy).getRequestContext()
.put(BindingProvider.SESSION_MAINTAIN_PROPERTY, "true"); | unknown | |
d8626 | train | Here is an approach for you:
first="$1"
last="${@: -1}"
echo "first: $first"
echo "last: $last"
sum=$((first + last))
echo "The sum of the two parameters are $sum"
You can run like this:
./program.sh 1 2 3 4
first: 1
last: 4
The sum of the two parameters are 5 | unknown | |
d8627 | train | Configure this /usr/share/grafana/conf/defaults.ini file as the following:
[smtp]
enabled = true
host = smtp.gmail.com:587
user = [email protected]
password = """Your_Password"""
cert_file =
key_file =
skip_verify = true
from_address = [email protected]
from_name = Your_Name
ehlo_identity =
In this example, I set up my own Gmail account with its SMTP server:
smtp.gmail.com with port 587 (TLS).
You should find your own SMTP server address and its port.
[NOTE]
Don't forget to put your password in the password field.
A: Mail alert grafana configuration for windows \grafana-6.4.4.windows-amd64\grafana-6.4.4\conf\defaults.ini
[smtp]
enabled = true
host = smtp.gmail.com:587
;user =
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =
;cert_file =
;key_file =
skip_verify = true
from_address = your_mail_id
from_name = Grafana
;ehlo_identity = dashboard.example.com | unknown | |
d8628 | train | Download Procmon, let it run, and filter for your DLL name. This will immediately give you the locations where the DLL was searched and which access path returned 0x43.
You even get the call stacks if you have the PDBs for your code as well (C/C++ only, no managed code).
A: Run the program through Dependency Walker in its Profile mode and let that fine tool tell you exactly what is going wrong. | unknown | |
d8629 | train | This is just a visual bug in Xcode 6. Whenever you copy an element with text, that text's font size seems to be visually altered. However, when you build and run the app, it should show up normally on your device or simulator.
You can fix the visual bug by clicking on the copied element, going to the attributes inspector, and then changing the font-size down one and then back up one. | unknown | |
d8630 | train | (This answer deals with simple optimisations and Python style; it works with the existing algorithm, teaching some points of optimisation, rather than replacing it with a more efficient one.)
Here are some points to start with to make the code easier to read and understand:
* Iterate over sList, not over range(len(sList)). for i in range(len(sList)) becomes for i in sList and sList[i] becomes i.
* No need for that tmpRad; put it inline.
* Instead of if a: if b: if c: use if a and b and c.
Now we're at this:
filteredList = []
for i in sList:
    minx = i['x'] - i['radius']
    maxx = i['x'] + i['radius']
    miny = i['y'] - i['radius']
    maxy = i['y'] + i['radius']
    minz = i['z'] - i['radius']
    maxz = i['z'] + i['radius']
    for j in nList:
        if minx <= j['x'] <= maxx and miny <= j['y'] <= maxy and minz <= j['z'] <= maxz and findRadius(i,j) <= i['radius']:
            filteredList.append(int(j['num']))
(PEP 8 would recommend splitting that long line to lines of no more than 80 characters; PEP 8 would also recommend filtered_list and s_list and n_list rather than filteredList, sList and nList.)
I've put the findRadius(i, j) <= i['radius'] first for style and because it looks like it might be more likely to evaluate to false, speeding up calculations. Then I've also inlined the minx etc. variables:
filteredList = []
for i in sList:
    for j in nList:
        if findRadius(i, j) <= i['radius'] \
           and i['x'] - i['radius'] <= j['x'] <= i['x'] + i['radius'] \
           and i['y'] - i['radius'] <= j['y'] <= i['y'] + i['radius'] \
           and i['z'] - i['radius'] <= j['z'] <= i['z'] + i['radius']:
            filteredList.append(int(j['num']))
One thing to think about is that i['x'] - i['radius'] <= j['x'] <= i['x'] + i['radius'] could be simplified; try things like subtracting i['x'] from all three parts.
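For example (the same test, just rewritten as an absolute-difference comparison):
# equivalent to i['x'] - i['radius'] <= j['x'] <= i['x'] + i['radius']
abs(j['x'] - i['x']) <= i['radius']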
You can shorten this even more with a list comprehension.
filteredList = [int(j['num']) for j in nList for i in sList
if findRadius(i, j) <= i['radius']
and i['x'] - i['radius'] <= j['x'] <= i['x'] + i['radius']
and i['y'] - i['radius'] <= j['y'] <= i['y'] + i['radius']
and i['z'] - i['radius'] <= j['z'] <= i['z'] + i['radius']]
And finally, named tuples (this has the side-effect of making them immutable, too, which is probably desired? Also note it's Python 2.6 only, read the page for how you could do it with older versions of Python):
from collections import namedtuple
node = namedtuple('node', 'x y z num')
sphere = namedtuple('sphere', 'x y z radius')
nList = [
node(x=0.0, y=0.0, z=0.0, num=1.0),
node(x=1.0, y=0.0, z=0.0, num=2.0),
node(x=2.0, y=0.0, z=0.0, num=3.0),
node(x=3.0, y=0.0, z=0.0, num=4.0),
node(x=4.0, y=0.0, z=0.0, num=5.0),
node(x=5.0, y=0.0, z=0.0, num=6.0),
node(x=6.0, y=0.0, z=0.0, num=7.0),
node(x=7.0, y=0.0, z=0.0, num=8.0),
node(x=8.0, y=0.0, z=0.0, num=9.0),
node(x=9.0, y=0.0, z=0.0, num=10.0)]
sList = [
sphere(x=25.0, y=18.0, z=26.0, radius=0.0056470000000000001),
sphere(x=23.0, y=29.0, z=45.0, radius=0.0066280000000000002),
sphere(x=29.0, y=46.0, z=13.0, radius=0.014350999999999999),
sphere(x=20.0, y=0.0, z=25.0, radius=0.014866000000000001),
sphere(x=31.0, y=27.0, z=18.0, radius=0.018311999999999998),
sphere(x=36.0, y=10.0, z=46.0, radius=0.024702000000000002),
sphere(x=13.0, y=27.0, z=48.0, radius=0.027300999999999999),
sphere(x=1.0, y=14.0, z=13.0, radius=0.033889000000000002),
sphere(x=20.0, y=31.0, z=11.0, radius=0.034118999999999997),
sphere(x=28.0, y=23.0, z=8.0, radius=0.036683)]
Then, instead of sphere['radius'] you can do sphere.radius. This makes the code neater:
filteredList = [int(j.num) for j in nList for i in sList
if findRadius(i, j) <= i.radius
and i.x - i.radius <= j.x <= i.x + i.radius
and i.y - i.radius <= j.y <= i.y + i.radius
and i.z - i.radius <= j.z <= i.z + i.radius]
Or, without the list comprehension,
filteredList = []
for i in sList:
    for j in nList:
        if findRadius(i, j) <= i.radius \
           and i.x - i.radius <= j.x <= i.x + i.radius \
           and i.y - i.radius <= j.y <= i.y + i.radius \
           and i.z - i.radius <= j.z <= i.z + i.radius:
            filteredList.append(int(j.num))
Finally, choose nicer names; [style changed slightly as per comments, putting findRadius at the end as it's more likely to be computationally expensive - you're the best judge of that, though]
filteredList = [int(n.num) for n in nodes for s in spheres
if s.x - s.radius <= n.x <= s.x + s.radius and
s.y - s.radius <= n.y <= s.y + s.radius and
s.z - s.radius <= n.z <= s.z + s.radius and
findRadius(s, n) <= s.radius]
Or,
filteredList = []
for s in spheres:
    for n in nodes:
        if (s.x - s.radius <= n.x <= s.x + s.radius and
                s.y - s.radius <= n.y <= s.y + s.radius and
                s.z - s.radius <= n.z <= s.z + s.radius and
                findRadius(s, n) <= s.radius):
            filteredList.append(int(n.num))
(You could put srad = s.radius in the outer loop for a probable slight performance gain if desired.)
A: Here is one cleanup we can make to the sample: unless you need to iterate over a list by index, you shouldn't; also avoid using range, and merge the ifs together.
filteredList = []
for a in sList:
    minx = (a['x']) - (a['radius'])
    maxx = (a['x']) + (a['radius'])
    miny = (a['y']) - (a['radius'])
    maxy = (a['y']) + (a['radius'])
    minz = (a['z']) - (a['radius'])
    maxz = (a['z']) + (a['radius'])
    for b in nList:
        if minx <= b['x'] <= maxx and miny <= b['y'] <= maxy and minz <= b['z'] <= maxz:
            tmpRad = findRadius(a,b)
            if tmpRad <= a['radius']:
                filteredList.append(int(b['num']))
A: First off, Python isn't built for that kind of iteration. Using indices to get at each element of a list is backwards, a kind of brain-damage that's taught by low-level languages where it's faster. In Python it's actually slower. range(len(whatever)) actually creates a new list of numbers, and then you work with the numbers that are handed to you from that list. What you really want to do is just work with objects that are handed to you from whatever.
While we're at it, we can pull out the common s['radius'] bit that is checked several times, and put all the if-checks for the bounding box on one line. Oh, and we don't need a separate 'tmpRad', and I assume the 'num's are already ints and don't need to be converted (if they do, why? Why not just have them converted ahead of time?)
None of this will make a huge difference, but it at least makes it easier to read, and definitely doesn't hurt.
filteredList = []
for s in sList:
    radius = s['radius']
    minx = s['x'] - radius
    maxx = s['x'] + radius
    miny = s['y'] - radius
    maxy = s['y'] + radius
    minz = s['z'] - radius
    maxz = s['z'] + radius
    for n in nList:
        if (minx <= n['x'] <= maxx) and (miny <= n['y'] <= maxy) and \
           (minz <= n['z'] <= maxz) and (findRadius(s, n) <= radius):
            filteredList.append(n['num'])
Now it's at least clear what's going on.
However, for the scale of the problem we're working with, it sounds like we're going to need algorithmic improvements. What you probably want to do here is use some kind of BSP (binary space partitioning) technique. The way this works is:
*
*First, we rearrange the nList into a tree. We cut it up into 8 smaller lists, based on whether x > 0, whether y > 0 and whether z > 0 for each point (8 combinations of the 3 boolean results). Then each of those gets cut into 8 again, using the same sort of criteria - e.g. if the possible range for x/y/z is -10..10, then we cut the "x > 0, y > 0, z > 0" list up according to whether x > 5, y > 5, z > 5, etc. You get the idea.
*For each point in the sList, we check whether minx > 0, etc. The beautiful part: if minx > 0, we don't have to check any of the 'x < 0' lists, and if maxx < 0, we don't have to check any of the 'x > 0' lists. And so forth. We figure out which of the 8 "octants" of the space the bounding box intersects with; and for each of those, we recursively check the appropriate octants of those octants, etc. until we get to the leaves of the tree, and then we do the normal point-in-bounding-box, then point-in-sphere tests.
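A rough sketch of the tree idea described in the two points above (this is just my illustration, not the poster's code; it assumes the same nList of dicts with 'x', 'y', 'z' keys, and all names here are made up):
def build_tree(points, depth=0, max_depth=5, leaf_size=16):
    # leaves keep a plain list of points; inner nodes split space at 'mid'
    if depth == max_depth or len(points) <= leaf_size:
        return ('leaf', points)
    mid = tuple(sum(p[k] for p in points) / len(points) for k in ('x', 'y', 'z'))
    buckets = {}
    for p in points:
        key = (p['x'] > mid[0], p['y'] > mid[1], p['z'] > mid[2])
        buckets.setdefault(key, []).append(p)
    children = dict((key, build_tree(pts, depth + 1, max_depth, leaf_size))
                    for key, pts in buckets.items())
    return ('node', mid, children)

def query(tree, lo, hi, out):
    # lo = (minx, miny, minz), hi = (maxx, maxy, maxz) of the bounding box
    if tree[0] == 'leaf':
        out.extend(p for p in tree[1]
                   if lo[0] <= p['x'] <= hi[0]
                   and lo[1] <= p['y'] <= hi[1]
                   and lo[2] <= p['z'] <= hi[2])
        return
    mid, children = tree[1], tree[2]
    for key, child in children.items():
        # visit an octant only if the bounding box reaches into it on every axis
        if all((hi[axis] > mid[axis]) if key[axis] else (lo[axis] <= mid[axis])
               for axis in range(3)):
            query(child, lo, hi, out)
Then, for each sphere, call query with its bounding box and run the exact findRadius test only on the few candidates returned.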
A: Actually, you could save all that by:
filteredList = [int(node['num']) for sphere in sList \
for node in nList if findRadius(sphere,node)<=sphere['radius']]
If the distance from a point to a sphere's centre is less than the sphere's radius, then I guess we can say it is in the sphere, right?
I assume findRadius is defined like:
def findRadius(sphere,node):
    return ((node['x']-sphere['x'])**2 + \
            (node['y']-sphere['y'])**2 + \
            (node['z']-sphere['z'])**2)**.5
A: (AFAICT, the following solution is algorithmically faster than any other answer posted so far: approximately O(N log N) vs O(N²). Caveat: this assumes that you don't have massive amounts of overlap between bounding boxes.)
If you are allowed to pre-compute an index structure:
*
*Push all the min/max x values into a set and sort them, thus creating a list of vertical regions spanning the x-axis. Associate each region with the set of bounding boxes that contain it.
*Repeat this procedure for min/max y values, to create a list of horizontal regions, and associate each region with the set of bounding boxes it contains.
*For each point being tested:
*
*Use a binary chop to find the horizontal region that contains the point's x coordinate. What you really want, though, is the set of bounding boxes associated with the region.
*Likewise, find the set of bounding boxes associated with the y coordinate.
*Find the intersection of these two sets.
*Test the bounding boxes in this residue set using Pythagoras.
A: Taking in all this advice, I managed to come up with a solution that was about 50x faster than the original.
I realized that the bottleneck was in the datatype (list of dicts) I was using. Looping over multiple lists was incredibly slow in my case, and using sets was much more efficient.
The first thing I did was implement named tuples. I knew how my list of nodes was numbered, which provided the hash I needed for efficiency.
def findNodesInSpheres(sList,nList,nx,ny,nz):
    print "Running findNodesInSpheres"
    filteredList = []
    for a in sList:
        rad = a.radius
        minx = (a.x) - (rad) if (a.x - rad > 0) else 0
        maxx = (a.x) + (rad) if (a.x + rad < nx) else nx
        miny = (a.y) - (rad) if (a.y - rad > 0) else 0
        maxy = (a.y) + (rad) if (a.y + rad < ny) else ny
        minz = (a.z) - (rad) if (a.z - rad > 0) else 0
        maxz = (a.z) + (rad) if (a.z + rad < nz) else nz
        boundingBox = set([ (i + j * (nx + 1) + k * (nx + 1) * (ny + 1)) for i in range(int(minx),int(maxx)+1)
                            for j in range(int(miny),int(maxy)+1) for k in range(int(minz),int(maxz)+1) ])
        for b in sorted(boundingBox):
            if findRadius(a,nList[b]) <= rad:
                filteredList.append(nList[b].num)
    return filteredList
Using set() instead of list provided massive speedups. The larger the data set (nx, ny, nz), the more the speedup.
It could still be improved using tree implementation and domain decomposition as has been suggested, but for the moment it works.
Thanks to everyone for the advice! | unknown | |
d8631 | train | make a helper for your view... put it in ApplicationController
helper_method :formatted_date
def formatted_date(item_date)
if (Date.today - item_date).abs < 6
item_date.strftime('%A')
else
item_date.strftime('%Y/%m/%d')
end
end
Then in your view, instead of showing the object_date field show formatted_date(object_date) | unknown | |
d8632 | train | Use jQuery to change the days dropdown according to the month selected.
myjsonarray is an array like the following,
{ "january":{"1","2","3" .... "31"}, "February":{"1","2"..."29"}, ... }
According to the months ...
var prevhtml;
jQuery(document).ready(function(){
    prevhtml = jQuery("#dobday").html();
});
function changedays(){
    var selected = jQuery("#dobmonth option:selected").text();
    var html = "";
    if(myjsonarray[selected]){
        jQuery.each(myjsonarray[selected], function( index, value ) {
            html += "<option value=\""+index+"\">"+value+"</option>";
        });
    }
    if(!myjsonarray[selected]) html = prevhtml;
    jQuery("#dobday").html(html);
}
Also change your select statement as
<select name="dobmonth" id="dobmonth" onchange="changedays()">
A: you can use HTML5's required attribute:
<input type=text required/>
<select required></select>
Although it's not fully supported by some older browsers
FIDDLE
UPDATE
You need to add required and for selects you need to set the first option to a blank value like so:
<option value="">Month</option>
NEW FIDDLE | unknown | |
d8633 | train | np.random.seed([3,1415])
s = pd.Series(np.random.choice(list('ABCDEFGHIJ'), 1000, p=np.arange(1, 11) / 55.))
s.value_counts()
I 176
J 167
H 136
F 128
G 111
E 85
D 83
C 52
B 38
A 24
dtype: int64
As percent
s.value_counts(normalize=True)
I 0.176
J 0.167
H 0.136
F 0.128
G 0.111
E 0.085
D 0.083
C 0.052
B 0.038
A 0.024
dtype: float64
counts = s.value_counts()
percent = counts / counts.sum()
fmt = '{:.1%}'.format
pd.DataFrame({'counts': counts, 'per': percent.map(fmt)})
counts per
I 176 17.6%
J 167 16.7%
H 136 13.6%
F 128 12.8%
G 111 11.1%
E 85 8.5%
D 83 8.3%
C 52 5.2%
B 38 3.8%
A 24 2.4%
A: I think you need:
#if output is Series, convert it to DataFrame
df = df.rename('a').to_frame()
df['per'] = (df.a * 100 / df.a.sum()).round(1).astype(str) + '%'
print (df)
a per
1 1349110 31.7%
2 1606640 37.8%
3 175629 4.1%
4 790062 18.6%
5 330978 7.8%
Timings:
It seems it is faster to use sum than to call value_counts twice:
In [184]: %timeit (jez(s))
10 loops, best of 3: 38.9 ms per loop
In [185]: %timeit (pir(s))
10 loops, best of 3: 76 ms per loop
Code for timings:
np.random.seed([3,1415])
s = pd.Series(np.random.choice(list('ABCDEFGHIJ'), 1000, p=np.arange(1, 11) / 55.))
s = pd.concat([s]*1000)#.reset_index(drop=True)
def jez(s):
    df = s.value_counts()
    df = df.rename('a').to_frame()
    df['per'] = (df.a * 100 / df.a.sum()).round(1).astype(str) + '%'
    return df

def pir(s):
    return pd.DataFrame({'a':s.value_counts(),
                         'per':s.value_counts(normalize=True).mul(100).round(1).astype(str) + '%'})
print (jez(s))
print (pir(s))
A: Here's a more pythonic snippet than what is proposed above I think
def aspercent(column, decimals=2):
    assert decimals >= 0
    return (round(column*100, decimals).astype(str) + "%")
aspercent(df['mark'].value_counts(normalize=True),decimals=1)
This will output:
1 1349110 31.7%
2 1606640 37.8%
3 175629 4.1%
4 790062 18.6%
5 330978 7.8%
This also allows you to adjust the number of decimals.
A: Create two series, first one with absolute values and a second one with percentages, and concatenate them:
import pandas
d = {'mark': ['pos', 'pos', 'pos', 'pos', 'pos',
'neg', 'neg', 'neg', 'neg',
'neutral', 'neutral' ]}
df = pd.DataFrame(d)
absolute = df['mark'].value_counts(normalize=False)
absolute.name = 'value'
percentage = df['mark'].value_counts(normalize=True)
percentage.name = '%'
percentage = (percentage*100).round(2)
pd.concat([absolute, percentage], axis=1)
Output:
value %
pos 5 45.45
neg 4 36.36
neutral 2 18.18 | unknown | |
d8634 | train | regex_match
Determines if the regular expression e matches the entire target character sequence, which may be specified as std::string, a C-string, or an iterator pair.
You need to use regex_search
Determines if there is a match between the regular expression e and some subsequence in the target character sequence.
Also you can use regex_iterator, example from here:
string text = "sp_call('%1','%2','%a');";
std::regex regexp("%[0-9]");
auto words_begin =
std::sregex_iterator(text.begin(), text.end(), regexp);
auto words_end = std::sregex_iterator();
std::cout << "Found "
<< std::distance(words_begin, words_end)
<< " words:\n";
for (std::sregex_iterator i = words_begin; i != words_end; ++i) {
std::smatch match = *i;
std::string match_str = match.str();
std::cout << match_str << '\n';
}
Found 2 words:
%1
%2 | unknown | |
d8635 | train | The mls_images.imgOrder = 0 condition should be in the join with mls_images, not mls_forms_listing_specifics.
Don't use GROUP BY if you're not using any aggregation functions. Use SELECT DISTINCT to prevent duplicates.
SELECT DISTINCT mls_subject_property.*, mls_images.imagePath, mls_forms_listing_specifics.listingspecificsListPrice
FROM mls_subject_property
LEFT JOIN mls_images ON mls_subject_property.mls_listingID = mls_images.mls_listingID AND mls_images.imgOrder = 0
LEFT JOIN mls_forms_listing_specifics ON mls_forms_listing_specifics.mls_listingID = mls_subject_property.mls_listingID
WHERE userID = 413 | unknown | |
d8636 | train | @stream
I have not tried Woorea, but I know a lot of developers are using jclouds. The link http://developer.rackspace.com/#home-sdks has a well-documented guide with examples of how to use the Java SDK.
Hope it helps.
A: It looks like you can build Swift independently (it is part of the woorea project),
as it states in the readme file here:
(com.woorea swift-client 3.0.0-SNAPSHOT)
https://github.com/woorea/openstack-java-sdk
the Maven artifact ID should be:
openstack-java-sdk
Here is a nice toturial that can be of hand:
https://github.com/woorea/openstack-java-sdk/wiki/Swift-Tutorial
it has the example for the java api for using SWIFT,
for example, this code snippet (more details in the link):
Properties properties = System.getProperties();
properties.put("verbose", "true");
properties.put("auth.credentials", "passwordCredentials");
properties.put("auth.username", "demo");
properties.put("auth.password", "secret0");
properties.put("auth.tenantName", "demo");
properties.put("identity.endpoint.publicURL","http://192.168.1.43:5000/v2.0");
OpenStackClient openstack = OpenStackClient.authenticate(properties);
AccountResource account = openstack.getStorageEndpoint();
account.container("hellocontainer").put();
account.container("hellocontainer").object("dir1").put();
account.container("hellocontainer").object("test1")
.put(new File("pom.xml"), new SwiftStorageObjectProperties() {{
setContentType("application/xml");
getCustomProperties().putAll(new HashMap<String, String>() {{
put("customkey.1", "customvalue.1");
}});
}});
List<SwiftStorageObject> objects = account.container("hellocontainer").get();
*
*just keep in mind that when using openstack's API you will most likely need to authenticate (get tokens etc..) so that you will need the Keystone lib as well
www.programcreek.com/java-api-examples/index.php?api=com.woorea.openstack.keystone.Keystone
hope this helps. | unknown | |
d8637 | train | It started working when I upgraded my spring boot version from 1.3 to 1.4 | unknown | |
d8638 | train | Found out how to do it:
<head>
<script type="text/javascript" src="TinyMCE/tinymce.min.js"></script>
<script type="text/javascript">
tinyMCE.init({
plugins: [ 'fullscreen' ],
setup: function(editor) {
editor.on('init', function(e) {
editor.execCommand('mceFullScreen');
});
}
});
</script>
</head>
<body>
<textarea id="editor"></textarea>
</body>
A: It is described in the official documentation too.
tinymce.activeEditor.execCommand('mceFullScreen');
be sure you have included it in the plugins like this
tinymce.init({
selector: 'textarea', // change this value according to your HTML
plugins: 'fullscreen',
menubar: 'view',
toolbar: 'fullscreen'
});
here is a link to the official documentation.
Official docs for tinymce editor fullscreen | unknown | |
d8639 | train | Number of possibilities:
(From Number of submatrix of size AxB in a matrix of size MxN)
In a matrix of size (m*n), there are (n-A+1)*(m-B+1) different matrices of size (A*B).
So the total number of possible inputs for your function is sum((n-A+1)*(m-B+1)) where A=1..n and B=1..m.
EDIT:
This is getting so huge when m=1000.
O(m^2n^2) is O(1000^4)... 1 trillion ... this won't fit in my small computer's memory :)
Structure:
I propose you build an hashmap that you simply index with the boundaries of your matrix:
A string built from a=M(i,j), b=M(k,l) where 0< i < k <(n+1) and 0< j < l <(m+1)
* e.g. you can build it so: aHashMap("["+i+","+j+"].["+k+","+l+"]")
Pre-computation:
* Have a function that calculates the max of a given matrix (a[i,j],b[k,l]) - say myMax(i,j,k,l). I assume there is no point showing you how.
* Then it's easy (sorry I can't easily compile anything, so I give only the principle for now):
for i=1 to n-1 do
for j=1 to m-1 do
for k=i to n do
for l=j to m do
aHashMap("["+i+","+j+"].["+k+","+l+"]") = myMax(i,j,k,l)
next
next
next
next
This is O(n^4), but assuming its pre-computational, there's no point, it just make your program bigger and bigger when storing aHashMap.
Good to know
Such problems also seem to be widely addressed at http://cs.stackexchange.com; e.g. this or that... so that SE site could be of interest to the OP.
Implementation of this naive approach:
With 99 x 95 it already gives millions of possibilities to pre-compute, taking about 3 GB of RAM!
$ ./p1
enter number of rows:99
enter number of cols:95
pre-computing...
hashmap ready with 22572000 entries.
matrix.h
#ifndef myMatrix_JCHO
#define myMatrix_JCHO
typedef unsigned int u_it;
class Matrix
{
public:
Matrix(u_it _n, u_it _m);
Matrix(const Matrix& matr);
Matrix& operator=(const Matrix& matr);
~Matrix();
u_it getNumRows() ;
u_it getNumCols() ;
int getC(u_it i, u_it j);
void setC(u_it i, u_it j, int v);
void printMatrix();
int maxSub(u_it a_i, u_it a_j, u_it b_k, u_it b_l);
private:
u_it n, m;
int **pC;
};
#endif
matrix.cpp
#include <iostream>
#include <string>
#include <sstream>
#include "matrix.h"
Matrix::Matrix(u_it _n, u_it _m) {
n=_n;
m=_m;
int k=0;
pC = new int*[n];
for (u_it i=0; i<n; ++i){
pC[i]=new int[m];
for(u_it j=0; j<m; ++j){
pC[i][j]=++k;
}
}
}
Matrix::~Matrix(){
for (u_it i=0; i<n; ++i){
delete [] pC[i];
}
delete [] pC;
std::cout << "matrix destroyed\n";
}
u_it Matrix::getNumRows() {
return n;
}
u_it Matrix::getNumCols() {
return m;
}
int Matrix::getC(u_it i, u_it j){
return pC[i][j];
}
void Matrix::setC(u_it i, u_it j, int v){
pC[i][j]=v;
}
void Matrix::printMatrix(){
for (u_it i=0; i<n; ++i){
std::cout << "row " <<i<<" [ ";
for(u_it j=0; j<m; ++j){
std::cout << pC[i][j] << '\t';
}
std::cout << "]\n";
}
}
// Return max of submatrix a(i,j); b(k,l)
int Matrix::maxSub(u_it a_i, u_it a_j, u_it b_k, u_it b_l) {
int res = -100000;
if (a_i<=b_k && a_j<=b_l && b_k<n && b_l<m) {
for (u_it i=a_i; i<=b_k; ++i){
for(u_it j=a_j; j<=b_l; ++j){
res= (pC[i][j]>res)? pC[i][j] : res;
}
}
} else {
std::cout << "invalid arguments: out of bounds\n";
return -100000;
}
return res;
}
main.cpp
#include <iostream>
#include <string>
#include <sstream>
#include <map>
#include <cassert>
#include "matrix.h"
std::string hashKey(u_it a_i, u_it a_j, u_it b_k, u_it b_l) {
std::stringstream ss;
ss << "max(a[" << a_i << "," << a_j << "],b[" << b_k << "," << b_l << "]";
return ss.str();
}
int main() {
u_it n_rows, n_cols;
std::cout << " enter number of rows:";
std::cin >> n_rows;
std::cout << " enter number of cols:";
std::cin >> n_cols;
std::cout << " pre-computing...\n";
std::map<std::string, int> myHMap;
Matrix * mat=new Matrix(n_rows,n_cols);
//mat->printMatrix();
// "PRE" computation
for (u_it i=0; i<n_rows; ++i) {
for (u_it j=0; j<n_cols; ++j) {
for (u_it k=i; k<n_rows; ++k) {
for (u_it l=j; l<n_cols; ++l) {
//std::cout <<"max(a["<<i<<","<<j<<"],b["<<k<<","<<l<<"]"<< mat->maxSub(i, j, k, l) <<'\n';
//std::cout << mat->hashKey(i, j, k ,l) <<" -> " << mat->maxSub(i, j, k, l) <<'\n';
myHMap[hashKey(i, j, k ,l)] = mat->maxSub(i, j, k, l);
}
}
}
}
std::cout << " hashmap ready with "<< myHMap.size() <<" entries.\n";
// call to values
u_it cw_i, cw_j, cw_k, cw_l;
cw_i=0;
std::string hKey;
while (cw_i < n_rows+1) {
std::cout << " enter i,:";
std::cin >> cw_i;
std::cout << " enter j,:";
std::cin >> cw_j;
std::cout << " enter k,:";
std::cin >> cw_k;
std::cout << " enter l:";
std::cin >> cw_l;
hKey = hashKey(cw_i, cw_j, cw_k, cw_l);
std::map<std::string, int>::iterator i = myHMap.find(hKey);
assert(i != myHMap.end());
std::cout << i->first <<" -> " << i->second <<'\n';
}
}
make
g++ -std=c++0x -std=c++0x -Wall -c -g matrix.cpp
g++ -std=c++0x -std=c++0x -Wall -c -g main.cpp
g++ -std=c++0x -std=c++0x -Wall -g matrix.o main.o -o p1
A: I found the answer to have it compute in O(mn) - still with a little pre-computation - but this time that last one is easily O(mn.log(mn)) too: its just ordering a list of all the matrix's values.
Pre-compute:
First step is to simply build an orderly structure of the matrix's values, say M(A), then use <algorithm>std::sort to order that structure.
Retrieve the max of any sub-matrix (a,b):
To get the max of any matrix, just start with the biggest from pre-computed structure M(A) and check if it is within (a,b).
* If it is, then you're done.
* Else, take the next one in M(A).
matrix.h
#ifndef myMatrix_JCHO
#define myMatrix_JCHO
typedef unsigned int u_it;
typedef std::pair<u_it, u_it> uup;
class Matrix
{
public:
Matrix(u_it _n, u_it _m);
Matrix(const Matrix& matr);
Matrix& operator=(const Matrix& matr);
~Matrix();
u_it getNumRows() ;
u_it getNumCols() ;
int getC(u_it i, u_it j);
void setC(u_it i, u_it j, int v);
void printMatrix();
int maxSub(u_it a_i, u_it a_j, u_it b_k, u_it b_l);
private:
u_it n, m;
int **pC;
};
#endif
matrix.cpp
#include <iostream>
#include <string>
#include <sstream>
#include "matrix.h"
Matrix::Matrix(u_it _n, u_it _m) {
n=_n;
m=_m;
//int k=0;
pC = new int*[n];
for (u_it i=0; i<n; ++i){
pC[i]=new int[m];
for(u_it j=0; j<m; ++j){
pC[i][j]=rand()%1000;
}
}
}
Matrix::~Matrix(){
for (u_it i=0; i<n; ++i){
delete [] pC[i];
}
delete [] pC;
std::cout << "matrix destroyed\n";
}
u_it Matrix::getNumRows() {
return n;
}
u_it Matrix::getNumCols() {
return m;
}
int Matrix::getC(u_it i, u_it j){
return pC[i][j];
}
void Matrix::setC(u_it i, u_it j, int v){
pC[i][j]=v;
}
void Matrix::printMatrix(){
for (u_it i=0; i<n; ++i){
std::cout << "row " <<i<<" [ ";
for(u_it j=0; j<m; ++j){
std::cout << pC[i][j] << '\t';
}
std::cout << "]\n";
}
}
main.cpp
#include <iostream>
#include <string>
#include <utility>
#include <algorithm>
#include <vector>
#include "matrix.h"
// sort function for my vector of pair:
bool oMyV(std::pair<uup, int> x, std::pair<uup, int> y) { return (x.second > y.second); }
// check that p is within matrix formed by a and b
bool isIn_a_b(uup p, uup a, uup b){
bool res = false;
if (p.first >= a.first && p.first <= b.first) {
if (p.second >= a.second && p.second <= b.second) {
res = true;
}
}
return res;
}
int main() {
u_it n_rows, n_cols;
std::cout << " enter number of rows:";
std::cin >> n_rows;
std::cout << " enter number of cols:";
std::cin >> n_cols;
std::cout << " pre-computing...\n";
std::pair<uup, int> *ps;
std::vector<std::pair<uup, int> > myV;
Matrix * mat=new Matrix(n_rows,n_cols);
// print to debug:
mat->printMatrix();
// "PRE" computation
for (u_it i=0; i<n_rows; ++i) {
for (u_it j=0; j<n_cols; ++j) {
ps=new std::pair<uup, int>(std::make_pair(i,j), mat->getC(i,j));
myV.push_back(*ps);
}
}
std::sort(myV.begin(), myV.end(), oMyV);
/* in case you want to print the ordered values for debugging */
for (std::vector<std::pair<uup, int> >::iterator it=myV.begin(); it!=myV.end(); ++it) {
std::cout << it->second << " at [" << it->first.first <<','<<it->first.second<< "]\n";
}
/**/
// call to values
bool byebye=false;
uup a, b;
do {
std::cout << " enter i,:"; std::cin >> a.first;
std::cout << " enter j,:"; std::cin >> a.second;
std::cout << " enter k,:"; std::cin >> b.first;
std::cout << " enter l,:"; std::cin >> b.second;
std::vector<std::pair<uup, int> >::iterator it=myV.begin();
std::cout << " a:["<<a.first<<','<<a.second<<"]-b:["<<b.first<<','<<b.second<<"] in ";
std::cout << " M:[0,0]--:["<<n_rows-1<<','<<n_cols-1<<"]\n";
// check validity:
if ( isIn_a_b(a, std::make_pair(0,0), std::make_pair(n_rows-1, n_cols-1) )
&& isIn_a_b(b, std::make_pair(0,0), std::make_pair(n_rows-1, n_cols-1) )
&& (a.first <= b.first)
&& (a.second <= b.second)
) {
while (it!=myV.end() && ! isIn_a_b(it->first, a, b)){
++it;
}
std::cout << "Found:" << it->second << " at [" << it->first.first <<','<<it->first.second<< "]\n";
} else {
std::cout << "makes no sense. bye.\n";
byebye=true;
}
} while (!byebye);
}
Makefile
(don't forget: recipe lines in a Makefile must be indented with tabs)
OBJS = matrix.o main.o
CC = g++ -std=c++0x
DEBUG = -g
CFLAGS = -std=c++0x -Wall -c $(DEBUG)
LFLAGS = -std=c++0x -Wall $(DEBUG)
TARFILE = ${HOME}/jcho/good/matrix.tar
p1 : $(OBJS)
$(CC) $(LFLAGS) $(OBJS) -o p1
matrix.o: matrix.cpp matrix.h
$(CC) $(CFLAGS) matrix.cpp
main.o: main.cpp matrix.h
$(CC) $(CFLAGS) main.cpp
clean:
\rm -f *.o *~ p1
tar:
tar cfv $(TARFILE) *.h *.cpp Makefile \
p1 && \
echo "tar $(TARFILE) created successfuly." | unknown | |
d8640 | train | Add the cucumber-jvm-deps-1.0.3.jar file to your build path. You can download it from cucumber-jvm-deps-1.0.3
A: If the NoClassDefFoundError is coming from either XmlPullParser or dom4j/element
you need to install this Eclipse plugin/update:
Eclipse -> Help -> Install New Software…
http://cucumber.github.com/cucumber-eclipse/update-site | unknown | |
d8641 | train | UINavigationController *navTmp = segue.destinationViewController;
YourController * xx = ((YourController *)[navTmp topViewController]);
xx.param = value;
A: Check to see if the destinationViewController is a UINavigationController, and if it is, then get its topViewController. That way it just automatically handles either case, and it's safe.
A:
But if I do that, it means that the origin controller must know that
the destination controller IS a NavigationController. I would like to
avoid that if possible ?
One possible solution is to subclass UINavigationController such that it can accept whatever data your source controller provides and in turn passes that data on to its root view controller. That might make particular sense if you have a number of segues, some leading to nav controllers and some not, and you want to handle them all the same way.
In other words, create a UINavigationController subclass that acts like a proxy for its root controller.
A: I see two possibilities:
*
*Use -isKindOfClass and check if it's a UINavigationController
*Create a protocol with a -rootViewController method, then create two categories conforming to the protocol, one on UIViewController and another on UINavigationController. Implement both: the one for UIViewController should return self, the one for UINavigationController should return topViewController. Now you'll be able to drop those categories on any controller and use [[segue destinationViewController] rootViewController] | unknown |
d8642 | train | Well, it depends on how a particular browser saves the state of the page.
Also try using the history.go() method (http://www.w3schools.com/jsref/met_his_go.asp) and see if the problem is solved.
A: how about resubmitting the form instead of reloading:
document.forms[0].submit(); // assumes there is only one form in your page
UPDATE: This should do it:
Assuming the div below is the element you'd like to transfer control to, use the scrollIntoView function:
<div id="fragment-1" name="fragment-1">
....
</div>
document.getElementById('fragment-1').scrollIntoView(); | unknown | |
d8643 | train | The difference is the double quotes. With the first code you'll end up with:
Content-Disposition: attachment; filename=Project_1_w h i t e s p a c e s.jnlp
with the second code you'll end up with:
Content-Disposition: attachment; filename="Project_1_w h i t e s p a c e s.jnlp"
What you probably want is something like:
$panel_id = 1;
$panelname = 'w h i t e s p a c e s';
$filename = sprintf('"Project_%d_%s.jnlp"', $panel_id, $panelname);
$invalid_chars = array('<', '>', '?', '"', ':', '|', '\\', '/', '*', '&');
$filename = str_replace($invalid_filenamechars, '', $filename);
$this->header('Content-Disposition: attachment; filename="'.$filename.'"');
This strips any double quotes within $filename, but then makes sure that $filename is always surrounded by double quotes.
A: RFC2616, which is the HTTP/1.1 spec, says this:
The Content-Disposition response-header field has been proposed as a
means for the origin server to suggest a default filename if the user
requests that the content is saved to a file. This usage is derived
from the definition of Content-Disposition in RFC 1806.
content-disposition = "Content-Disposition" ":"
disposition-type *( ";" disposition-parm )
disposition-type = "attachment" | disp-extension-token
disposition-parm = filename-parm | disp-extension-parm
filename-parm = "filename" "=" quoted-string
disp-extension-token = token
disp-extension-parm = token "=" ( token | quoted-string )
An example is
Content-Disposition: attachment; filename="fname.ext"
Thus, sending this:
header('Content-Disposition: attachment; filename="' . $filename . '"');
conforms to the second form (quoted-string) and should do what you expect it to - take care to only send SPACE (ASCII dec 32 / hex 20) as whitespace, not some of the other fancy whitespace characters. | unknown | |
d8644 | train | I don't think it's possible. Because of the same origin policy, you can't communicate between window 2 and window 1 and 3. So window 1 and 3 can't communicate. Unless you're using some session or cookies, but it's outside the scope of you question if I'm not mistaken. | unknown | |
d8645 | train | So the answer to this question seems to be that there isn't a way supported by Ecto to do this. @maartenvanvliet solution works nicely, with the downside of relying on internal implementation.
My solution to this problem was to have the function search_field to always search in the last joined table, using the ... syntax described here:
# Searches for the `search_term` in the `field` in the last joined table in `initial_query`.
defp search_field(initial_query, field, search_term) do
initial_query
|> or_where(
[..., t],
fragment(
"CAST(? AS varchar) ILIKE ?",
field(t, ^field),
^"%#{search_term}%"
)
)
end
So this function would be used like this:
Answer
|> join(:left, [a], q in assoc(a, :question), as: :question)
|> search_field(:text, search_text)
|> join(:left, [a, q], s in assoc(a, :survey), as: :survey)
|> search_field(:title, search_text)
Which, in my opinion, still reads nicely, with the downside of requiring that we are able to change the initial_query.
A: The trick is to retrieve the position of the named binding in the bindings. The named bindings are stored in the %Ecto.Query{aliases: aliases} field.
def named_binding_position(query, binding) do
Map.get(query.aliases, binding)
end
def search_field(query, table, field, search_term) do
position = named_binding_position(query, table)
query
|> or_where(
[{t, position}],
fragment(
"CAST(? AS varchar) ILIKE ?",
field(t, ^field),
^"%#{search_term}%"
)
)
end
We first lookup the position of the named binding in the query.aliases. Then use this position to build the query.
Now, when we call
Answer
|> join(:left, [a], q in assoc(a, :question), as: :question)
|> join(:left, [a, q], s in assoc(a, :survey), as: :survey)
|> search_field(:question, :text, "bogus")
It should yield something like
#Ecto.Query<from a in Answer,
left_join: q in assoc(a, :question), as: :question,
or_where: fragment("CAST(? AS varchar) ILIKE ?", q.text, ^"%bogus%")>
Of note is that the {t, position} tuples in %Query.aliases to refer to the position of the named binding is an internal implementation and not documented. Therefore, could be subject to change. See https://github.com/elixir-ecto/ecto/issues/2832 for more information | unknown | |
d8646 | train | The first thing to do is try getting rid of the before_create :generate_slug and before_update :generate_slug lines and replace them with
before_validation :generate_slug
Your uniqueness validation may work then. | unknown | |
d8647 | train | First creating a table having less rows and/or columns and then splitting single cells definitely is not the way to go. Instead create the table having as much rows and/or columns as maximum needed. Merging is simpler than splitting.
According to your screen shots the table needs 4 rows and 9 columns.
The following complete example creates exactly the table of your screen shots:
import java.io.File;
import java.io.FileOutputStream;
import java.math.BigInteger;
import org.apache.poi.xwpf.usermodel.XWPFDocument;
import org.apache.poi.xwpf.usermodel.XWPFTable;
import org.apache.poi.xwpf.usermodel.XWPFTableCell;
import org.apache.poi.xwpf.usermodel.XWPFParagraph;
import org.apache.poi.xwpf.usermodel.XWPFRun;
import org.openxmlformats.schemas.wordprocessingml.x2006.main.CTTcPr;
import org.openxmlformats.schemas.wordprocessingml.x2006.main.CTTblWidth;
import org.openxmlformats.schemas.wordprocessingml.x2006.main.STTblWidth;
import org.openxmlformats.schemas.wordprocessingml.x2006.main.CTVMerge;
import org.openxmlformats.schemas.wordprocessingml.x2006.main.STMerge;
public class CreateWordTableMerge3 {
static void mergeCellVertically(XWPFTable table, int col, int fromRow, int toRow) {
for(int rowIndex = fromRow; rowIndex <= toRow; rowIndex++) {
XWPFTableCell cell = table.getRow(rowIndex).getCell(col);
CTVMerge vmerge = CTVMerge.Factory.newInstance();
if(rowIndex == fromRow){
// The first merged cell is set with RESTART merge value
vmerge.setVal(STMerge.RESTART);
} else {
// Cells which join (merge) the first one, are set with CONTINUE
vmerge.setVal(STMerge.CONTINUE);
// and the content should be removed
for (int i = cell.getParagraphs().size(); i > 0; i--) {
cell.removeParagraph(0);
}
cell.addParagraph();
}
// Try getting the TcPr. Not simply setting an new one every time.
CTTcPr tcPr = cell.getCTTc().getTcPr();
if (tcPr == null) tcPr = cell.getCTTc().addNewTcPr();
tcPr.setVMerge(vmerge);
}
}
//merging horizontally by setting grid span instead of using CTHMerge
static void mergeCellHorizontally(XWPFTable table, int row, int fromCol, int toCol) {
XWPFTableCell cell = table.getRow(row).getCell(fromCol);
// Try getting the TcPr. Not simply setting an new one every time.
CTTcPr tcPr = cell.getCTTc().getTcPr();
if (tcPr == null) tcPr = cell.getCTTc().addNewTcPr();
// The first merged cell has grid span property set
if (tcPr.isSetGridSpan()) {
tcPr.getGridSpan().setVal(BigInteger.valueOf(toCol-fromCol+1));
} else {
tcPr.addNewGridSpan().setVal(BigInteger.valueOf(toCol-fromCol+1));
}
// Cells which join (merge) the first one, must be removed
for(int colIndex = toCol; colIndex > fromCol; colIndex--) {
table.getRow(row).getCtRow().removeTc(colIndex);
table.getRow(row).removeCell(colIndex);
}
}
static void setColumnWidth(XWPFTable table, int row, int col, int width) {
CTTblWidth tblWidth = CTTblWidth.Factory.newInstance();
tblWidth.setW(BigInteger.valueOf(width));
tblWidth.setType(STTblWidth.DXA);
CTTcPr tcPr = table.getRow(row).getCell(col).getCTTc().getTcPr();
if (tcPr != null) {
tcPr.setTcW(tblWidth);
} else {
tcPr = CTTcPr.Factory.newInstance();
tcPr.setTcW(tblWidth);
table.getRow(row).getCell(col).getCTTc().setTcPr(tcPr);
}
}
public static void main(String[] args) throws Exception {
XWPFDocument document= new XWPFDocument();
XWPFParagraph paragraph = document.createParagraph();
XWPFRun run=paragraph.createRun();
run.setText("The table:");
//create table
//4 rows 9 columns
XWPFTable table = document.createTable(4,9);
for (int row = 0; row < 4; row++) {
for (int col = 0; col < 9; col++) {
//table.getRow(row).getCell(col).setText("row " + row + ", col " + col);
if (row < 3) table.getRow(row).getCell(col).setColor("D9D9D9");
}
}
//defining the column widths for the grid
//column width values are in unit twentieths of a point (1/1440 of an inch)
int defaultColWidth = 1*1440*6/9; // 9 columns fits to 6 inches
int[] colunmWidths = new int[] {
defaultColWidth, defaultColWidth, defaultColWidth, defaultColWidth,
defaultColWidth, defaultColWidth, defaultColWidth, defaultColWidth, defaultColWidth
};
//create CTTblGrid for this table with widths of the 8 columns.
//necessary for Libreoffice/Openoffice to accept the column widths.
//first column
table.getCTTbl().addNewTblGrid().addNewGridCol().setW(BigInteger.valueOf(colunmWidths[0]));
//other columns
for (int col = 1; col < colunmWidths.length; col++) {
table.getCTTbl().getTblGrid().addNewGridCol().setW(BigInteger.valueOf(colunmWidths[col]));
}
//using the merge methods and setting the column widths
//horizontally merge all columns in first row
mergeCellHorizontally(table, 0, 0, 8);
setColumnWidth(table, 0, 0, colunmWidths[0]+colunmWidths[1]+colunmWidths[2]+colunmWidths[3]
+colunmWidths[4]+colunmWidths[5]+colunmWidths[6]+colunmWidths[7]+colunmWidths[8]);
//horizontally merge last two columns in second row
mergeCellHorizontally(table, 1, 7, 8);
setColumnWidth(table, 1, 7, colunmWidths[7]+colunmWidths[7]);
//vertically merge row 2 and 3 in column 1 to 7
for (int c = 0; c < 7; c++) {
mergeCellVertically(table, c, 1, 2);
}
paragraph = document.createParagraph();
FileOutputStream out = new FileOutputStream("create_table.docx");
document.write(out);
out.close();
}
}
Result: | unknown | |
d8648 | train | In newer versions of simplejson (and the json module in Python 2.7) you implement the default method in your subclasses:
from json import JSONEncoder
from pymongo.objectid import ObjectId
class MongoEncoder(JSONEncoder):
def default(self, obj, **kwargs):
if isinstance(obj, ObjectId):
return str(obj)
else:
return JSONEncoder.default(obj, **kwargs)
You could then use the encoder with MongoEncoder().encode(obj) or json.dumps(obj, cls=MongoEncoder). | unknown | |
d8649 | train | Try this:
var array = $('input[type="text"]').map(function() {
return $(this).val();
}).get();
alert(JSON.stringify(array));
Demo.
A: You can put all the forms' data in an array and join them with &
var formdata = []
$('.myclass').each(function(){
formdata.push($(this).serialize());
});
var data = formdata.join('&');
http://jsfiddle.net/6jzwR/3/ | unknown | |
d8650 | train | You can logout using session.invalidate() (or response.getSession().invalidate() in a servlet)
If using cookies, you will have to to call response.addCookie(..) with your cookie with a negative lifetime.
The auto-logout can be achieved with setting the session timeout. In web.xml
<session-config>
<session-timeout>20</session-timeout>
</session-config>
A: How are you dealing with logins and sessions? If its as simple as a session cookie you'd just expire/delete the cookie to logout
A: The way I do this on our CMS is to have a setTimeout started upon page load. This - after 20 minutes redirects the user to a page that clears the session, and hence logs the user out. Unfortunately this has one side effect of when a user has more than one window open, sometimes one window can reach the timeout period before the one the user is active in. This causes a session to timeout prematurely, and breaks flow.
One way around this caveat could be to keep an activity ID for each action the user performs (i.e. creating a content item, uploading an image). This activity ID is kept in the user table, and the timeout timer (in Javascript) can check against this ID to see if the window that has timed out is the most recently active window or not. If the ID in that window (passed from say a PHP variable into the HTML output) does not match, then it does not force a session timeout.
This is quite a tricky one to approach without introducing breaking changes to an interface. | unknown | |
d8651 | train | ExtJS CellEditing plugin does not support "canceling" an edit by the user - whenever you click into the field and then leave, the field is validated, and if that does not fail, it is "edited". This is different in RowEditing, where a cancel button is shown that would cancel the edit and fire the canceledit event without validating the input.
So you would have to use the beforeedit and validateedit events on the CellEditing plugin. How to use them is described very well in the ExtJS docs, which documents how to access the date field, the record and the store all at the same time. Example code:
beforeedit:function( editor, context, eOpts ) {
editor.oldTime = context.record.data.REA_LIT_URSPR;// Save old value.
},
validateedit:function ( editor, context, eOpts ) {
if(editor.context.value == editor.oldTime) // Compare Values and decide
context.cancel = true; // whether to save it or not
}
A: {
xtype: 'datecolumn',
format:'Y-m-d g:i:s A',
text: 'Ursp. LT',
align: 'center',
flex:1,
dataIndex: 'REA_LIT_URSPR',
editor: {
xtype: 'datefield',
format:'Y-m-d g:i:s A',
editable: false
}
}
Ok so the problem is on column date format, maybe you lose a part of date because g:i:s aren't saved from the datecolumn.
I haven't tryed the code, but if it doesn't work try to create a new date format using these docs http://docs.sencha.com/extjs/5.1/5.1.1-apidocs/#!/api/Ext.Date-method-parse (same on extjs 4)
sorry for first answer, just not clear question | unknown | |
d8652 | train | After a lot of debugging, turns out my fog_credentials hash is not going through as expected on heroku. Instead of passing "#{Rails.root}/config/gce.yml", I am doing this.
has_attached_file :avatar,
styles: {:big => "200x200>", thumb: "50x50>"},
storage: :fog,
fog_credentials: { aws_access_key_id: '<your_access_id>'
aws_secret_access_key: '<your secret>'
provider: 'Google' },
fog_directory: "google-bucket-name"
validates_attachment_content_type :avatar, content_type: /\Aimage\/.*\Z/ | unknown | |
d8653 | train | if it was possible to generate that same type of output using the GIT command line. That way, if no tool exists, I can easily script it and send out an email with my own tools.
You may be after one of the following options of git diff
*
*git diff --stat <from_commit> <until_commit>
*git diff --shortstat <from_commit> <until_commit>
Those commands, when ran against the LibGit2Sharp project, output the following
$ git diff --stat a4c6c45 14ab41c
LibGit2Sharp.Tests/StashFixture.cs | 3 ++-
LibGit2Sharp/Core/NativeMethods.cs | 8 +++-----
LibGit2Sharp/Core/Proxy.cs | 18 +++++++++---
LibGit2Sharp/ReferenceCollection.cs | 5 ++++-
4 files changed, 18 insertions(+), 16 deletions(-)
$ git diff --shortstat a4c6c45 14ab41c
4 files changed, 18 insertions(+), 16 deletions(-)
You'll note that this is pretty much the same content than from the GitHub page showcasing those changes (Note: Clicking on the "Show Diff Stats" will show the per file counts). | unknown | |
d8654 | train | You can do this easily with a Python UDF:
create or replace function py_unescape(X string)
returns string
language python
handler = 'x'
runtime_version = 3.8
as $$
import html
def x(s):
return html.unescape(s)
$$
;
select py_unescape('À makes me feel Á');
-- À makes me feel Á | unknown | |
d8655 | train | The input function returns a string (str). To convert it to an int you need to use the int function:
power = int(input("How much power would you like to have?(power goes from 1 to a 100)"))
Note that int() will raise a ValueError if the string the user inputs isn't one that can be interpreted as an integer.
If you want to repeatedly prompt the user until they provide a valid value, use a loop with a try/except:
while True:
try:
power = int(input(
"How much power would you like to have? (power goes from 1 to 100)"
)) # raises ValueError if not an int
assert 1 <= power <= 100 # raises AssertionError if not in range
except (AssertionError, ValueError):
continue # prompt again
else:
break # continue on with this power value | unknown | |
d8656 | train | You would need to replace .load() with .get() for instance.
$.get('page.php?val='+myvalue, function( data ) {
console.log( data );
// you might want to "store" the result in another variable here
});
One word of caution: the data parameter in the above snippet does not necesarilly shim the responseText property from the underlaying XHR object, but it'll contain whatever the request returned. If you need direct access to that, you can call it like
$.get('page.php?val='+myvalue, function( data, status, jXHR ) {
console.log( jXHR.responseText );
});
A: Define a callback function like:
$("#div").load('page.php?val='+myvalue, function(responseText) {
myVar = responseText;
});
Noticed everyone else is saying use $.get - you dont need todo this, it will work fine as above.
.load( url [, data] [, complete(responseText, textStatus, XMLHttpRequest)] )
See http://api.jquery.com/load/
A: Yes, you should use $.get method:
var variable = null;
$.get('page.php?val='+myvalue,function(response){
variable = response;
});
More on this command: http://api.jquery.com/jQuery.get/
There's also a $.post and $.ajax commands
A: Try something like this:
var request = $.ajax({
url: "/ajax-url/",
type: "GET",
dataType: "html"
});
request.done(function(msg) {
// here, msg will contain your response.
}); | unknown | |
d8657 | train | You are trying to initialize request.Result with new Result() which has no values. That may cause this error. | unknown | |
d8658 | train | I am the author of the blog you refer to. Let me try and answer your question.
Your comment from Mar 15 describes a proxy approach. What you should try to do is, once your proxy has received an SSO token you should pass that on to the client, using a SET-COOKIE header.
So when you successfully authenticate to SAP you get an SSO token an HTTP header of the response.
E.g.
set-cookie: MYSAPSSO2=AjQxMDM.....BABhHAFcA%3d%3d; path=/; domain=esworkplace.sap.com
Your proxy should simply pass that on to the client's browser and change the domain name to that of the proxy, otherwise the client will not use it.
set-cookie: MYSAPSSO2=AjQxMDM.....BABhHAFcA%3d%3d; path=/; domain=yourproxydomain.com
Next time the browser makes a request to your proxy it will automatically include this session cookie in the request header, like this:
Cookie: MYSAPSSO2=AjQxMDMBABhH......%2fjmaRu5sSb28M6rEg%3d%3d
Your proxy can read that cookie from the HTTP request headers and use it to make a call.
I hope this helps.
A: I'm responsible for SAPUI5 - although I'm not 100% sure whether I completely understand the issue, I'll try to answer. The SAPUI5 calls to read data use XMLHttpRequests and thus all certificates or cookies are sent along with the requests automatically. Futhermore, Gateway is expected to accept these (valid) certificates.
So following the answer from Istak and using cookies with a proper domain, it should just work without the need of an API in UI5.
Anyhow, if I missed something, please explain more in detail.
Best regards
Stefan
A: Not Sure about SAPUI5 and oData, I have used MYSAPSSO2 token with Java EE web applications / sencha touch based apps which connect sto SAP backend systems with SSO. You simply pass the token as a cookie in the http request.
There are many ways of doing this, the one I used was SimpleClientHttpRequestFactory or you could do that in UrlConnection itself. | unknown | |
d8659 | train | It looks like the two sheets in your example are at different zoom levels, theorizing this may be an excel bug:
Have you tried with the zoom levels set the same on the active sheet and the sheet containing the plot? If that works you could try getting the location of two vertically adjacent cells on both sheets and then using the difference of the .top values as the 'row height' on each sheet and using the ratio between the active sheet and the target plot's sheets row height as a conversion factor.
Another thought (an ugly kludge, since the row height difference is probably not accurate enough) is that you could turn application.screenupdating off (if it's not already 'false'), then create a plot on the active page and get its height and width, then compare that to the height and width of a new plot on the target sheet (and use that for the conversion factor).
A: I think it has something to do with not updating "graphically", try making the graph(s) invisible at the beginning of the code and visible again at the end:
With Sheets("Overview").ChartObjects("Spending_Chart")
.Visible = False
'Do all your stuff including setting the .Top
.Visible = True
End With
A: Hard to say without more of your code. Your snippet works fine on my sample workbook. I don't think this is a code problem as much as an Excel/VBA bug you are hitting. More on that below.
One quick comment up front... I don't like your syntax of Shapes.Range(Array("Add_Category_Button")). I would prefer the shorter Shapes("Add_Category_Button") but both work the same for me.
Here is my test routine:
Sub my_test()
Dim i As Integer
For i = 1 To ThisWorkbook.Sheets.Count
ThisWorkbook.Sheets(i).Activate
Debug.Print ActiveSheet.Name
Sheets("Overview").Shapes("Pie_Chart").Top = _
Sheets("Overview").Shapes("Add_Category_Button").Top + 22
Debug.Print "1st: " & _
Sheets("Overview").Shapes("Add_Category_Button").Top & _
" + 22 = " & _
Sheets("Overview").Shapes("Pie_Chart").Top
Sheets("Overview").Shapes("Column_Chart").Top = _
Sheets("Overview").Shapes("Pie_Chart").Top + _
Sheets("Overview").Shapes("Pie_Chart").Height
Debug.Print "2nd: " & _
Sheets("Overview").Shapes("Pie_Chart").Top & _
" + " & _
Sheets("Overview").Shapes("Pie_Chart").Height & _
" = " & _
Sheets("Overview").Shapes("Column_Chart").Top
Next i
End Sub
And my result:
Overview
1st: 64 + 22 = 86
2nd: 86 + 216 = 302
Interval
1st: 64 + 22 = 86
2nd: 86 + 216 = 302
1-16 to 1-14
1st: 64 + 22 = 86
2nd: 86 + 216 = 302
1-17 to 1-15
1st: 64 + 22 = 86
2nd: 86 + 216 = 302
Control
1st: 64 + 22 = 86
2nd: 86 + 216 = 302
I tried using different view percentages and got consistent results, though the numbers above changed any time I altered the view percentage on the Overview sheet.
I did find I could create some wrong results if I had 2 windows of the same workbook open via View > New Window:
Interval
1st: 75.33331 + 22 = 114.6666
2nd: 114.6666 + 258.5001 = 445.3889
1-16 to 1-14
1st: 75.33331 + 22 = 114.6666
2nd: 114.6666 + 310.5001 = 508.9445
1-17 to 1-15
1st: 75.33331 + 22 = 114.6666
2nd: 114.6666 + 374.0557 = 584.0556
Control
1st: 75.33331 + 22 = 114.6666
2nd: 114.6666 + 449.1668 = 676.5002
In my case, I think the height of the chart was changing after setting the .top property. If you have more than one window or form open, I could imagine that you could have a similar issue. This was very finicky and hard to replicate, but I suspect it has to do with situations where .top was changed with different windows and view percentages involved. When I just had one window and no forms my results were solid.
In my file, the chart height and width seemed to be altered as well, and I was seeing issues where the chart rendered improperly (for example, the selection box was a different size than the chart as displayed, or part of the chart was truncated). This is why I think this may be a bug.
The rendering errors always reset when I reset the chart size, so maybe try also setting the height/width in your code?
Sheets("Overview").Shapes("Add_Category_Button").Top = 58
Sheets("Overview").Shapes("Add_Category_Button").Height = 22
Sheets("Overview").Shapes("Pie_Chart").Height = 200
Sheets("Overview").Shapes("Pie_Chart").Width = 300
Sheets("Overview").Shapes("Column_Chart").Height = 200
Sheets("Overview").Shapes("Column_Chart").Width = 300 | unknown | |
d8660 | train | You are almost there, this is a working code:
val typedArray = context.obtainStyledAttributes(it, R.styleable.CustomView, 0, 0)
val image = typedArray.getResourceId(R.styleable.CustomView_ myImage, -1) // the -1 parameter could be your placeholder e.g. R.drawable.placeholder_image
and then you have the resource of your drawable that you can work with, as for the example:
imageView.setImageDrawable(ContextCompat.getDrawable(context, image))
or if you would like to get a drawable directly call:
val image = typedArray.getDrawable(R.styleable.CustomView_myImage)
but remember that this way your drawable might be null.
Enjoy coding! | unknown | |
d8661 | train | That's how javascript works. To solve this you can use:
{icon: 'fa-plus', display: 'New', action: this.onNew.bind(this) }
Running example: https://gist.run/?id=cefe45c5a402c348d01d41d9cde42489
Explanation:
https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_objects/Function/bind
Javascript call() & apply() vs bind()?
https://alexperry.io/personal/2016/04/03/How-to-use-apply-bind-and-call.html | unknown | |
d8662 | train | If currentSong.voters is just an array, you can go with two solutions:
ES6:
currentSong.voters.includes(Meteor.userId())
ES5:
currentSong.voters.indexOf(Meteor.userId()) > -1
or a shorthand
~currentSong.voters.indexOf(Meteor.userId()) | unknown | |
d8663 | train | It is performed in function you mentioned. Center coordinates are shifted because when you rotate image, top-left corner(origin) is moved, so they have to compensate it. And scale doesn`t change at all. | unknown | |
d8664 | train | *
*Remove or comment out gem 'coffee-rails' from Gemfile.
*Change Javascript files that ends with .js.coffee to .js.
*Add config.generators.javascript_engine = :js to your application.rb.
*Make sure your tmp cache is cleared with rake tmp:cache:clear | unknown | |
d8665 | train | It's described further down that document, right here: https://stripe.com/docs/billing/subscriptions/fixed-price#manage-subscription-payment-failure | unknown | |
d8666 | train | First lets assume that our input comes in the form of a list of tuples T = [(A[0], B[0], C[0]), (A[1], B[1], C[1]) ... (A[N - 1], B[N - 1], C[N - 1])]
The first observation we can make is that we can sort on T[0] (in reverse order). Then for each tuple (a, b, c), to determine if it cannot win, we ask if we've already seen a tuple (d, e, f) such that e > b && f > c. We don't need to check the first element because we are given that d > a* since T is sorted in reverse.
Okay, so now how do we check this second criteria?
We can reframe it like so: out of all tuples (d, e, f), that we've already seen with e > b, what is the maximum value of f? If the max value is greater than c, then we know that this tuple cannot win.
To handle this part we can use a segment tree with max updates and max range queries. When we encounter a tuple (d, e, f), we can set tree[e] = max(tree[e], f). tree[i] will represent the third element with i being the second element.
To answer a query like "what is the maximum value of f such that e > b", we do max(tree[b+1...]), to get the largest third element over a range of possible second elements.
Since we are only doing suffix queries, you can get away with using a modified fenwick tree, but it is easier to explain with a segment tree.
This will give us an O(NlogN) solution, for sorting T and doing O(logN) work with our segment tree for every tuple.
*Note: this should actually be d >= a. However it is easier to explain the algorithm when we pretend everything is unique. The only modification you need to make to accommodate duplicate values of the first element is to process your queries and updates in buckets of tuples of the same value. This means that we will perform our check for all tuples with the same first element, and only then do we update tree[e] = max(tree[e], f) for all of those tuples we performed the check on. This ensures that no tuple with the same first value has updated the tree already when another tuple is querying the tree. | unknown | |
d8667 | train | Support Map Fragment extends androidx.fragment.app.Fragment.
You are importing android.support.v4.app.FragmentActivity which uses android.support.v4.app.Fragment. These are two different classes so they are incompatible.
You need to migrate your app to Android X: https://developer.android.com/jetpack/androidx/migrate
Then, in your build.gradle file, replace com.android.support:support-fragment with androidx.fragment:fragment.
Hope that helps! | unknown | |
d8668 | train | You just have to complete your animation logic. The advantage of this approach is its more verbose but still only one DOM lookup.
$("#branding").click(function () {
var $element = $(this);
var isVisible = $element.hasClass('showItem');
if(isVisible)
$element.removeClass("showItem");
} else {
$element.addClass("showItem");
}
});
http://jsfiddle.net/ngau3xjL/4/ | unknown | |
d8669 | train | Do the same optimization in LibreOffice Calc. Algorithms done in LibreOffice Calc are available as part of the open-source project. | unknown | |
d8670 | train | I did a little more digging around and I ended up haphazardly stumbling on the answer. I was missing "Integrated Security=SSPI" in my connection string and it turns out I didn't need the dot before "\SQLEXPRESS" in my data source. Here's the connection string that worked for me:
adodbapi.connect(r'Provider=SQLOLEDB;Data Source=COMPUTERNAME\SQLEXPRESS;Initial Catalog=Test;User ID=COMPUTERNAME\USERNAME; Password=PASSWORD;Integrated Security=SSPI') | unknown | |
d8671 | train | If anyone is still having this problem, which exists on all mx NumericSteppers, here is what Adobe had to say:
https://bugs.adobe.com/jira/browse/SDK-18278 | unknown | |
d8672 | train | Please replace:
<Fragment
with:
<fragment
Also, you can get rid of the redundant/incorrect namespace declarations in that element.
Also also, in the future, post the complete stack trace, not just part of one line, to make it easier for people to help you. | unknown | |
d8673 | train | In a very similar way to how you reflect constant buffers:
ID3D11ShaderReflection* reflectionInterface;
D3DReflect(bytecode, bytecodeLength, IID_ID3D11ShaderReflection, (void**)&reflectionInterface);
D3D11_SHADER_INPUT_BIND_DESC bindDesc;
reflectionInterface->GetResourceBindingDescByName("textureMap", &bindDesc);
bindDesc.BindPoint is the index of the slot the texture is bound to. | unknown | |
d8674 | train | I think you'll need to decode the JSON first.
}).done(function(data){
data = JSON.parse(data);
console(data['post']);
});
A: You can use basic JS too to attain this.
// property is an optional parameter.
function disp(obj, property) {
var prop;
if (property) {
obj[property] && (console.log(obj[property]));
} else {
for (prop in obj) {
if (obj.hasOwnProperty(prop)) {
console.log(prop + " = " + obj[prop])
}
}
}
}
var jsondata = {
"id": "10",
"skills": "english",
"post": "devloper",
"emp_name": "jaydeep",
"timestemp": "10:45"
}
//disp(jsondata, "post");
disp(jsondata); | unknown | |
d8675 | train | I have looked at source code and have found that default value for MaxWorkerThreads is set to 100
private static readonly ConfigurationProperty _propMaxWorkerThreads = new ConfigurationProperty("maxWorkerThreads", typeof (int), (object) 100, (TypeConverter) null, (ConfigurationValidatorBase) new IntegerValidator(1, 2147483646), ConfigurationPropertyOptions.None);
This field is added to properties collection in static constructor
ProcessModelSection._properties.Add(ProcessModelSection._propMaxWorkerThreads);
In property definition they do set default value to 20
[IntegerValidator(MaxValue = 2147483646, MinValue = 1)]
[ConfigurationProperty("maxWorkerThreads", DefaultValue = 20)]
public int MaxWorkerThreads
But this obviously give no effect. Maybe it's some kind of legacy implementation. By the way it behaves this way only if autoConfig is set to false. When it's set to true I have 32K worker threads in my application. Probably this behavior depends on IIS version.
A: According to the MSDN,
the default maximum [number of threads in the ASP.net server pool] for
.NET 4.5 is 5,000
Source | unknown | |
d8676 | train | "Automation error" points to an error in resolving the proper Net dll's. This may be caused by the fact that the Net Framweworks (1.1., 3(.5),4.0) on the XP machine may not be the same as the Win7 box. Alternatively the file structure of the Net dll's is wrong and some dll's cannot be found.
I have had good results by using fuslogvw to troubleshoot these issues. | unknown | |
d8677 | train | It sounds like you are looking for a FocusListener. The Text control inherits addFocusListener() etc. from Control, so check the inherited methods section of its API docs.
A: *
*Save the text content to a variable on focus gain, then on focus lost compare it with the latest text - if different then text is modified else not.
*This listener will not be called on each and every character change.
*If you simply traverse the controls(with TAB key) then also you can detect whether text is changed or not.
import org.eclipse.swt.SWT;
import org.eclipse.swt.events.FocusEvent;
import org.eclipse.swt.events.FocusListener;
import org.eclipse.swt.layout.GridData;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;
import org.eclipse.swt.widgets.Text;
public class Snippet19 {
private static String temp = "";
public static void main(String[] args) {
Display display = new Display();
Shell shell = new Shell(display);
shell.setLayout(new GridLayout());
final Text text = new Text(shell, SWT.BORDER);
text.setLayoutData(new GridData());
text.addFocusListener(new FocusListener() {
@Override
public void focusLost(FocusEvent e) {
if (temp.equals(text.getText())) {
System.out.println("Text not modified");
} else {
System.out.println("Text conent modified");
}
}
@Override
public void focusGained(FocusEvent e) {
temp = text.getText();
}
});
final Text text1 = new Text(shell, SWT.BORDER);
text1.setText("chandrayya");
text1.setLayoutData(new GridData());
final Text text2 = new Text(shell, SWT.BORDER);
text2.setText("chandrayya");
text2.setLayoutData(new GridData());
shell.open();
while (!shell.isDisposed()) {
if (!display.readAndDispatch())
display.sleep();
}
display.dispose();
}
} | unknown | |
d8678 | train | There was one added recently: commit
The meat is in the Java code:
package demo.oauth;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import static org.apache.commons.codec.digest.DigestUtils.md5Hex;
import static org.apache.commons.codec.digest.DigestUtils.sha256Hex;
public class Signer {
public static void sign(String token, Map<String, String> params) {
List<String> list = new ArrayList();
String tokenClientSlat = "";
for (String key : params.keySet()) {
if (key.equals("token_client_salt")) {
tokenClientSlat = params.get(key);
}
String paramString = key + "=" + params.get(key);
list.add(paramString);
}
Collections.sort(list);
StringBuilder sb = new StringBuilder();
for (String s : list) {
sb.append(s);
}
sb.append(token);
String sig = md5Hex(sb.toString());
String tokenSig = sha256Hex(sig + tokenClientSlat);
params.put("sig", sig);
params.put("__NStokensig", tokenSig);
}
}
And then the feature:
* def Signer = Java.type('demo.oauth.Signer')
* def params =
"""
{
'userId': '399645532',
'os':'android',
'client_key': '3c2cd3f3',
'token': '141a649988c946ae9b5356049c316c5d-838424771',
'token_client_salt': 'd340a54c43d5642e21289f7ede858995'
}
"""
* eval Signer.sign('382700b563f4', params)
* path 'echo'
* form fields params
* method post
* status 200 | unknown | |
d8679 | train | The easiest way to use strict mode is to use an IIFE (immediately Invoked Function Expression) like so:
(function()
{
'use strict';
var foo = 123;//works fine
bar = 345;//ReferenceError: bar is not defined
}());
To create a new-line in the console, use shift + enter, or write your code in a separate editor first, then copy-paste it to the console. Setting up a fiddle is all fine and dandy, but just test your code with the markup it was written for (ie: just clear the browser cache and test).
However, I'd urge you to install node.js, still. It's a lot easier to test your code, or validate it (both syntactically and coding-style wise) using JSHint. There are also a lot of ways to examine and your code that run out of node.js, so it's a really good development tool to have | unknown | |
d8680 | train | The easiest way would be to create a wrapper object around the actual db abstraction object(s).
For example, if there is an object of type "db" that provides you some convienance functions such as "select" and "update", you could write a class that extends "db" and overrides the "select function". It might look something like this (its an example as you have not provided enough info on your specific implementation).
class db2 extends db
{
public function select($tableName, $whereClause)
{
$result = parent::select($tableName, $whereClause);
return strip_tags($result);
}
}
Then you would replace your object that instantiated "db" and instead instantiate "db2".
$db = new db($connectionParams);
should be replaced with
$db = new db2($connectionParams);
Now all your existing queries should use the new function which removes the tags. | unknown | |
d8681 | train | In your dbase query wizard, select the "Single value" option. This will adjust the SQL code to use the firstResult query, which take the first entry it finds for your query, in your case the first time it finds "Lala".
Kudos on the data adjustments for the question :D | unknown | |
d8682 | train | SOLVED
Since the first time the error was raised by the use of bulit-in python str() function , while other elements of python syntax did not raise any error, I guessed python built-in functions cannot be interpreted by Ansible (still I don't understand why).
So I looked up for a way to do the data manipulation by using some python methods of the object, instead than a python function.
So instead of turning the int object into a string with str(my_object), I exploited the int python method .__str__().
So I substituted the upper line with
myhost-{{ hostvars[host].number.__str__().zfill(3) }}
and this time it worked.
Conclusion
This makes me think one cannot use python functions inside ansible {{ }} tags, but only python objects methods. | unknown | |
d8683 | train | Would appreciate if someone could tell me why by just by adding one new input causes this tool to crash?!
You can't add two input statement inside the same configuration. Like the documentation says, if you want to add more than one input in a config file, you should use something like that:
input {
file {
path => "/var/log/messages"
type => "syslog"
}
file {
path => "/var/log/apache/access.log"
type => "apache"
}
} | unknown | |
d8684 | train | Use :nth-child(odd).
Answer 1:
If you want all the odd numbers, then do this:
.rules-container .ng-star-inserted:nth-child(odd) {
background-color: red;
}
<div class="rules-container">
<div class="ng-star-inserted">
<div class="rules-form">
aaaaaaaaaaaa
</div>
</div>
<div class="ng-star-inserted">
<div class="rules-form">
bbbbbbbbbb
</div>
</div>
<div class="ng-star-inserted">
<div class="rules-form">
ccccccccccccc
</div>
</div>
</div>
<div class="ng-star-inserted">
<div class="rules-form">
dddddddddddddd
</div>
</div>
<div class="ng-star-inserted">
<div class="rules-form">
eeeeeeeeeeeeee
</div>
</div>
Answer 2:
If you only want 1 & 3, not 5 , then change the css to:
.rules-container .ng-star-inserted:nth-child(1), .rules-container .ng-star-inserted:nth-child(3) {
background-color: red;
}
<div class="rules-container">
<div class="ng-star-inserted">
<div class="rules-form">
aaaaaaaaaaaa
</div>
</div>
<div class="ng-star-inserted">
<div class="rules-form">
bbbbbbbbbb
</div>
</div>
<div class="ng-star-inserted">
<div class="rules-form">
ccccccccccccc
</div>
</div>
</div>
<div class="ng-star-inserted">
<div class="rules-form">
dddddddddddddd
</div>
</div>
<div class="ng-star-inserted">
<div class="rules-form">
eeeeeeeeeeeeee
</div>
</div> | unknown | |
d8685 | train | The following code will do the trick:
import re
data = '''
#% text_encoding = utf8
:xy_name1 Text
:xy_name2 Text text text to a text.
Text and text to text text, text and
text provides text text text text.
:xy_name3 Text
'''
print(re.findall(r'^:(\S+)\s+([\S\s]*?)(?=\n:|\Z)',data,re.M))
The last parameter in the re.findall is a flag that makes the search a multi-line search.
^:(\S+) will match the beginning of any line followed by a colon and at least one non-space character
\s+ then consumes the tab and spaces before the description
([\S\s]*?) matches the description beginning with the first non-space character and including everything in its way - newlines inclusive. You can not use . here because in a multi-line search the . is not matching the newline character. That is why I used [\S\s] which matches all non-space characters and all space characters. The ? at the end makes the * non-greedy. Otherwise that group would consume everything all the way to the end of the data.
(?=\n:|\Z) marks the end of the description. This group is a positive look-ahead which matches either a newline followed by a colon (\n:) or the end of the data (\Z). A look-ahead does not consume the newline and the colon therefore they will be available for the next match of the findall.
The output of above code is
[('xy_name1', 'Text\n'), ('xy_name2', 'Text text text to a text. \n\nText and text to text text, text and \n\ntext provides text text text text.\n'), ('xy_name3', 'Text\n')]
Try it out here!
A: You can use capture the ENTRY_NAME in group 1.
For the ENTRY_DESCRIPTION in group 2 you can match the rest of the line, followed by all lines that do not start with the entry name pattern.
^:([\w:|!.?%()-]+)\t(.*(?:\n(?!:[\w:|!.?%()-]+\t).*)*)
Regex demo | Python demo
Example
import re
pattern = r"^:([\w:|!.?%()-]+)\t(.*(?:\n(?!:[\w:|!.?%()-]+\t).*)*)"
s = ("#% text_encoding = utf8\n\n"
":xy_name1 Text\n\n"
":xy_name2 Text text text to a text. \n\n"
"Text and text to text text, text and \n\n"
"text provides text text text text.\n\n"
":xy_name3 Text")
print(re.findall(pattern, s, re.MULTILINE))
Output
[
('xy_name1', 'Text\n'),
('xy_name2', 'Text text text to a text. \n\nText and text to text text, text and \n\ntext provides text text text text.\n'),
('xy_name3', 'Text')
] | unknown | |
d8686 | train | Two tables,
*
*wish_list and
*wish_list_item
Solution B:
One table,
*
*wish_list_with_item
This would have a wish list item per column, so it will be many columns on this table.
Which is better?
A: Solution A is better. Anytime you try to store collections in columns rather than rows, you're going to run into problems.
A: Solution A is normalized and solution B is not. In most situations solution A will be better and more flexible. The main case where this is not true is when you are making a summary table of some complex join on large tables, for use as a source of quick queries for common questions. Unless you are building a data mart, this is unlikely to be the case. Go with solution A.
A: I would definitely go with the first solution:
*
*at least, it means you can have as many items as you want -- and it'll probably be easier to deal with on the application side (think about "deleting an item" or "adding an item", for instance)
*the second solution absolutely doesn't feel right
A: The second solution would be better only if you are forced to do a denormalization for the sake of easier caching or if your application grows immensely and you need database sharding. In all other cases I prefer a cleaner design of the database.
A: Multi - table is complicated, so, single table is practical, simple and thus worth considering. Assuming wish list is a feature of a web site, where some users come, create user account, and then list their wishes it might not be completely obvious, but if you think of it, the question needs to be asked, why would you want to have a header table defining wish list(s) instead of having just the second table, listing wishes. In my mind the need for such an approach indicates that each user will ( likely ) want to have more than one wish list. One for close friends and relatives and one for space ship adventure planned for year 2050 :) Otherwise all the wishes could probably be neatly listed in the wish detail ( second ) table, without the first, for one huge wish list ( let's call it default wish list ), without distinction between wishes, be that something you would like for a present, or plan to accomplish while preparing for space ship adventures ( may all your wishes come true! :) ).
Going back to the design of the tables, here are the considerations. If you do not plan to
*
*use wish list for wish list fulfillment
*add any specifics or details for each of the "wishes" separately
then storing the list in something like a long string field allows a single table.
If you need 2, then adding a second table is justified, but it can still be avoided by turning the "long string" field into an XML column with attributes, where you can pair the wished item with some additional comments and/or information.
If you need 1, then a two-table solution is required, so that a relationship can be established between items on the wish list and anything related to a specific item, be that a gift from a close friend or relative which fulfills the wish, or something like a catalog of things stored in yet another table.
If you do use the multi-table approach then you will need to be careful that the table listing individual wish items doesn't have entries without a matching header entry defining the list itself. Usually databases use a foreign key to enforce such a relationship. So the "item" table will need to have a field with a key from the parent table.
Summarizing, it is very likely you will have 2 or 3 tables.
Table 1 - listing user accounts
In the simplest case this table will have a field "my wish list" with long string and / or XML column.
Table 2 - enumerating user wish lists with a key noting related user account.
Table 3 - enumerating wish items, with key from either user account ( if user can have only one wish list ) or Table 2 if each user can have multiple distinct wish lists.
Finally, in theory, users can have shared wish lists. For example, Kate and Simon plan to marry in year 2025. They wish to have a wedding ceremony in a cabin on Vashon Island ( the line is long and last time I checked the place is booked 4 years in advance ). In that case the possibility of a 4th table comes along. This table will pair together "users" and "wish lists", to enable joint ownership of a wish list.
Hope this helps.
-- cheers! | unknown | |
d8687 | train | Under Linux, you can use the "inotify" tools. They probably ship with all major distributions. Here is the wiki for it: wiki - Inotify
Note in the supported events list you have:
IN_CLOSE_WRITE - sent when a file opened for writing is closed
IN_CLOSE_NOWRITE - sent when a file opened not for writing is closed
these are what you look for. I did not manage to see something similar on windows.
Now as for using them, there can be various ways. I used a java library jnotify
Note that the library is cross platform, so you don't want to use the main class as windows does not support events for close file. you will want to use the linux API which exposes the full linux capabilities. just read the description page and you know its what you request : jnotify - linux
Note that in my case, I had to download the library source because I needed compile the shared object file "libjnotify.so" for 64 bit. the one provided worked only under 32 bit. Maybe they provide it now you can check.
check the examples for code and how to add and remove watches. just remember to use the "JNotify_linux" class instead of "JNotify" and then you can use a mask with your operation, for example.
private final int MASK = JNotify_linux.IN_CLOSE_WRITE;
I hope it will work for you. | unknown | |
d8688 | train | This is how:
msedge.exe --kiosk https://google.com/ --edge-kiosk-type=fullscreen --no-first-run
You can find more information about Edge and kiosk mode:
https://learn.microsoft.com/en-us/deployedge/microsoft-edge-configure-kiosk-mode | unknown | |
d8689 | train | I was able to find the answer to my problem:
When I added content to the deployment project, the dll was not in the bin. When I dragged it into the bin, the program worked.
A: For the benefit of others who might have had the same error, it can also come due to the ASP.NET runtime being unable to locate the /bin folder.
For this, make sure you have marked the virtual directory containing your application as a web application root.
e.g. In godaddy web hosting you should mark Set Application Root to the folder/sub-folder containing your application. (Making it a virtual directory is not enough).
Hope it will help someone.
A: Just replace the CodeBehind attribute with CodeFile | unknown | |
d8690 | train | I think what you want is something like this:
for (id key in dictionary) {
NSLog(@"key: %@, value: %@", key, [dictionary objectForKey:key]);
}
Taken from here.
A: This is also a good choice if you like blocks.
[dict enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop) {
}]
A: for (id key in mydictionary) {
id mything = [mydictionary objectForKey:key];
}
A: keyEnumerator returns an object that lets you iterate through each of the keys in the dictionary. From here
A: To my knowledge, at the time when you asked the question, there was only one way of traversing keys and values at the same time: CFDictionaryApplyFunction.
Using this Core Foundation function involves an ugly C function pointer for the body of the loop and a lot of not-less-ugly, yet toll-free-bridged casting.
@zekel has the modern way of iterating a dictionary. Vote him up! | unknown | |
d8691 | train | Use:
df = df.sort_values('is_eval', kind='mergesort', ascending=False).drop_duplicates(['timestamp','id','ch'])
print (df)
timestamp id ch is_eval c
2 12 1 1 True 4
1 13 1 0 False 1 | unknown | |
d8692 | train | It's about how you're invoking the UI update; check the AppendText method below.
private BackgroundWorker bw1;
private void button1_Click(object sender, EventArgs e)
{
bw1 = new BackgroundWorker();
bw1.DoWork += new DoWorkEventHandler(bw_DoWork);
bw1.RunWorkerCompleted += bw_RunWorkerCompleted;
bw1.RunWorkerAsync();
}
private void bw_DoWork(object sender, DoWorkEventArgs e)
{
string path = @"F:\DXHyperlink\Book.txt";
if (File.Exists(path))
{
string readText = File.ReadAllText(path);
foreach (string line in readText.Split(new string[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries))
{
AppendText(line);
Thread.Sleep(500);
}
}
}
private void AppendText(string line)
{
if (richTextBox1.InvokeRequired)
{
richTextBox1.Invoke((ThreadStart)(() => AppendText(line)));
}
else
{
richTextBox1.AppendText(line + Environment.NewLine);
}
}
In addition to that reading the whole file text is very inefficient. I would rather read chunk by chunk and update the UI. i.e.
private void bw_DoWork(object sender, DoWorkEventArgs e)
{
string path = @"F:\DXHyperlink\Book.txt";
const int chunkSize = 1024;
using (var file = File.OpenRead(path))
{
var buffer = new byte[chunkSize];
while ((file.Read(buffer, 0, buffer.Length)) > 0)
{
string stringData = System.Text.Encoding.UTF8.GetString(buffer);
AppendText(string.Join(Environment.NewLine, stringData.Split(new[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries)));
}
}
}
A: You don't want to concatenate strings in a loop.
A System.String object is immutable. When two strings are
concatenated, a new String object is created. Iterative string
concatenation creates multiple strings that are un-referenced and must
be garbage collected. For better performance, use the
System.Text.StringBuilder class.
The following code is very inefficient:
for (int i = 0; i < lines.Length; i++)
{
richEditControl1.Text += lines[i] + "\n";
}
Try instead:
private void bw_DoWork(object sender, DoWorkEventArgs e)
{
// Cpu intensive work happens in the background thread.
var text = string.Join("\r\n", lines); // 'lines' is the string[] read from the file earlier
// The following code is invoked in the UI thread and it only assigns the result.
// So that the UI is not blocked for long.
Invoke((ThreadStart)delegate()
{
richEditControl1.Text = text;
});
}
A: Why do you want to split the lines and join them again?
Strings are immutable, which means they can't be changed. So every time you do Text += "..." it has to create a new string and put it in Text. For a 10 MB string that is not an ideal approach, and it may take an extremely long time for huge strings.
You can see What is the difference between a mutable and immutable string in C#?
If you really want to split them and join them again, then StringBuilder is the right option for you.
StringBuilder strb = new StringBuilder();
for (int i = 0; i < lines.Length; i++)
{
strb.Append(lines[i] + "\n");
}
richEditControl1.Text = strb.ToString();
You can see String vs. StringBuilder
Internally, a StringBuilder is essentially a list of characters, and it is mutable, meaning it can be changed.
Inside the loop you can do any extra work with the string and append the result to the StringBuilder. After the loop your StringBuilder is ready; convert it to a string and assign it to Text.
A: It took me a while to nail this one..
Test one & two:
First I created some clean data:
string l10 = " 123456789";
string l100 = l10 + l10 + l10 + l10 + l10 + l10 + l10 + l10 +l10 + l10;
string big = "";
StringBuilder sb = new StringBuilder(10001000);
for (int i = 1; i <= 100000; i++)
// this takes 3 seconds to load
sb.AppendLine(i.ToString("Line 000,000,000 ") + l100 + " www-stackexchange-com ");
// this takes 45 seconds to load !!
//sb.AppendLine(i.ToString("Line 000,000,000 ") + l100 + " www.stackexchange.com ");
big = sb.ToString();
Console.WriteLine("\r\nStringLength: " + big.Length.ToString("###,###,##0") + " ");
richTextBox1.WordWrap = false;
richTextBox1.Font = new System.Drawing.Font("Consolas", 8f);
richTextBox1.AppendText(big);
Console.WriteLine(richTextBox1.Text.Length.ToString("###,###,##0") + " chars in RTB");
Console.WriteLine(richTextBox1.Lines.Length.ToString("###,###,##0") + " lines in RTB ");
Displaying 100k lines totalling around 14 MB takes either 2-3 seconds or 45-50 seconds.
Cranking the line count up to 500k lines brings the plain-text load time up to around 15-20 seconds, and the version that includes a (valid) link at the end of each line takes several minutes.
When I go to 1M lines the loading crashes VS.
Conclusions:
*
*It takes 10+ times longer to load text with links in it, and during that time the UI freezes.
*Loading 10-15MB of textual data is no real problem as such.
Test three:
string bigFile = File.ReadAllText("D:\\AllDVDFiles.txt");
richTextBox1.AppendText(bigFile);
(This was actually the start of my investigation.) This tries to load an 8 MB file containing directory and file info from a large number of data DVDs. And: it freezes, too.
As we have seen, the file size is not the reason. Nor are there any links embedded.
At first glance the reason is unusual characters in some of the filenames. After saving the file as UTF-8 and changing the read command to..:
string bigFile = File.ReadAllText("D:\\AllDVDFiles.txt", Encoding.UTF8);
..the file loads just fine in 1-2 seconds, as expected..
Final conclusions:
*
*You need to watch out for the wrong encoding, as those characters can freeze the RTB during loading.
*And when you add links, you must expect the loading to take a lot (10-20x) longer than for pure text. I tried to trick the RTB by preparing an RTF string, but it didn't help. It seems that analyzing and storing all those links will always take that long.
So: If you really need a link on every line, do partition the data into smaller parts and give the user an interface to scroll & search through those parts.
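A minimal sketch of such a paging approach (control, field and method names are placeholders; it assumes the lines are already in memory and that System.Linq is imported):
private const int PageSize = 1000;
private string[] allLines = new string[0]; // e.g. filled via File.ReadAllLines(...)
private int currentPage;
private void ShowPage(int page)
{
    if (allLines.Length == 0) return;
    int lastPage = (allLines.Length - 1) / PageSize;
    currentPage = Math.Max(0, Math.Min(page, lastPage));
    var pageLines = allLines.Skip(currentPage * PageSize).Take(PageSize);
    richTextBox1.Text = string.Join(Environment.NewLine, pageLines);
}
private void btnNext_Click(object sender, EventArgs e) { ShowPage(currentPage + 1); }
private void btnPrev_Click(object sender, EventArgs e) { ShowPage(currentPage - 1); }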
Of course appending all those lines one by one will always be way too slow, but this has been mentioned in the comments and other answers already. | unknown |
d8693 | train | Just run ./dev/change-scala-version.sh 2.11 from your spark directory to switch all the code to 2.11. Then run mvn (3.3.3+) or make-distribution.sh with your flags set.
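For illustration only, a hedged example of that sequence (the exact profiles and flags depend on your Hadoop version and setup; note the differing advice below about whether to pass -Dscala-2.11):
./dev/change-scala-version.sh 2.11
./build/mvn -DskipTests clean package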
A: Refer to Angelo Genovese's comment: do not include -Dscala-2.11 in the build command.
A: If you don't specifically need spark-sql, then just exclude sql related modules from build:
mvn clean package -Dscala-2.11 -DskipTests -pl '!sql/core,!sql/catalyst,!sql/hive'
A: I was running into this problem also, in a project that I'd imported into IntelliJ from a Maven pom.xml. My co-worker helped me figure out that although <scope>runtime</scope> is okay for most dependencies, this particular dependency needs to be <scope>compile</scope> (for reasons we don't understand):
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-reflect</artifactId>
<version>${scala.version}</version>
<scope>compile</scope>
</dependency>
A: This build issue can be overcome by first changing the Scala version from 2.10 to 2.11 by running the 'change-scala-version.sh' script, i.e. spark-1.6.1/dev/change-scala-version.sh 2.11
Refer to the link below for detailed info.
http://gibbons.org.uk/spark-on-windows-feb-2016 | unknown | |
d8694 | train | You just need to put the .flex class on the upper-level container, like below:
<div className='flex align-center'>
{data.map((x, index) => <PharmacyCard className="relative" key={index} props={x} />)}</div>
Hope this link will help you get a better grasp of flexbox:
https://codepen.io/enxaneta/full/adLPwv
A:
So i want to see like 3 component in a row.
Since you have a clear idea of the layout, I suggest wrapping your components in another div with a three-column grid, something like this:
<div className="grid grid-cols-3">
{data.map((x, index) => <PharmacyCard className="relative" key={index} props={x} />)}
</div>
Depending on your design, you may add a gap class as well.
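A small illustration (gap-4 is just an example spacing value):
<div className="grid grid-cols-3 gap-4">
  {data.map((x, index) => <PharmacyCard className="relative" key={index} props={x} />)}
</div>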
Also consider removing the w-3/12 class from the components so they fill the grid's width. | unknown |
d8695 | train | table {
border: 25px solid green;
}
instead of
table {
border: 25px green;
}
A: You have to define the type of border, so in your case I guess you want a solid border.
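For instance, any of the other border-style keywords works the same way:
table {
  border: 25px dashed green; /* or dotted, double, etc. */
}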
Here you have all css types of borders | unknown | |
d8696 | train | It looks like you need the following relations:
in your ArticlesToAuthors model:
'author' => array(self::BELONGS_TO, 'Authors', 'author_id'),
'article' => array(self::BELONGS_TO, 'Articles', 'article_id'),
and, for completeness, in your Authors model:
'articlesToAuthors' => array(self::HAS_MANY, 'ArticlesToAuthors', array('id' => 'author_id'), ),
This should allow you to access $a->author->name. Not tested, that's just off the top of my head. | unknown | |
d8697 | train | You don't necessarily need a global variable here. You can directly access the member attributes of a class by using the object itself. So in this case, you can access the table attribute of the TestApp class via app.table, which would look something like this:
def select_input_file():
#...
app = TestApp(root, input_file_path)
app.place(bordermode = INSIDE,height = 500, width = 2000, x =0, y=50)
df = app.table # contains the updated table which can be passed to other functions
newFunc( df ) # sample call
A: Avoid global to achieve this.
Currently, all your stateful variables exist in the module (file). You can do the same for your table, outside of TestApp, and then pass it through __init__:
import csv
import tkinter as tk
import tkinter.ttk as tkrttk
from tkinter import *
from tkinter import filedialog
import pandas as pd
from pandastable import Table, TableModel
table = Table(showtoolbar=False, showstatusbar=False)
root = tk.Tk()
root.geometry("2000x1000")
root.title('Workshop Manager')
def select_input_file():
global input_file_path
input_file_path = filedialog.askopenfilename(
filetypes=(("CSV files", "*.csv"),))
app = TestApp(root, input_file_path)
app.place(bordermode = INSIDE,height = 500, width = 2000, x =0, y=50)
class TestApp(tk.Frame):
def __init__(self, parent, input_file_path, editable = True, enable_menus = True, table=table):
super().__init__(parent)
self.table = table
self.table.importCSV(input_file_path)
self.table.show(input_file_path)
self.table.addColumn('Current Status')
self.table.addColumn('Assign Technician')
self.table.autoResizeColumns()
root.mainloop()
Your table object is now accessible to anything that can see the module namespace. Here table is a mutable object, and all changes will be reflected to any function that accesses that object.
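For example, any other function in the same module can read that shared object. (This assumes pandastable exposes the underlying DataFrame as table.model.df; check your version's API.)
def print_row_count():
    # works because 'table' lives at module level and is shared by reference
    print(len(table.model.df))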
One suggestion: it is preferable to separate out your definitions (classes, functions) from your stateful bits that are created at run time. This will greatly help clarify dependencies. It is typical to use the line if __name__ == "__main__": at the bottom of the file to start the "script" part of your app, keeping all definitions above. I've seen some packages (looking at you, Flask!) break this convention a bit, and it can cause headaches.
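A minimal sketch of that layout (details elided; the names reuse those from the code above):
# imports and definitions (TestApp, select_input_file, ...) go above
if __name__ == "__main__":
    root = tk.Tk()
    root.geometry("2000x1000")
    root.title('Workshop Manager')
    # build widgets and menus here, then hand control to Tk
    root.mainloop()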
On that note, your select_input_file function has a few issues. There's no good reason to create an app instance there. A better option would be to make this a method in the App class for example. | unknown | |
d8698 | train | You could use a generator:
assert all(isinstance(e, int)
for l1 in bed_data.values()
for l2 in l1 for e in l2)
It will stop at the first invalid value and raise an AssertionError. If all values are correct, there is no choice but to test them all.
A: You don't need to test all items. Stop at the first failed check (not an instance of int), and prefer TypeError over AssertionError:
import itertools
for l in bed_data.values():
for v in itertools.chain.from_iterable(l):
if not isinstance(v, int):
raise TypeError(f"Values of the dictionnary aren't lists of integers '{v}'") | unknown | |
d8699 | train | Change this:
add-adgroupmember -identity $_.Group -member $_.Accountname
To this:
add-adgroupmember -identity $user.Group -member (Get-ADUser $user.Accountname)
A: @EBGreen has answered what's wrong with your code. Just coming up with an alternative here. Instead of running the command once per member, you can try to add all members of a group at the same time. The Member parameter supports an array, so try this:
Import-Csv "C:\Scripts\Import Bulk Users into bulk groups\bulkgroups3.csv" | Group-Object Group | % {
#Foreach Group, get ADUser object for users and add members
$users = $_.Group | % { Get-ADUser $_.Accountname }
Add-ADGroupMember -Identity $_.Name -Member $users
}
EDIT I've successfully tested this on a 2012 DC with the following content in test.csv (the values represent an existing group name and an existing samaccountname/username):
Group,Accountname
"Mytestgroup","kim_akers"
"Mytestgroup","user1"
EDIT2 It shouldn't have any problems with deeper-level OUs. I tested with an OU that was 7 levels deep and it had no problem. If you have all the users inside one OU (or find the closest OU that contains all the sub-OUs), see if this script helps. Remember to replace the DN of your "base OU" in the -SearchBase parameter.
Import-Csv "C:\Scripts\Import Bulk Users into bulk groups\bulkgroups3.csv" | Group-Object Group | % {
#Foreach Group, get ADUser object for users and add members
$users = $_.Group | % { Get-ADUser -Searchbase "OU=mybaseou,OU=test,OU=users,OU=contoso,DC=Contoso,DC=com" -Filter { samaccountname -eq $_.Accountname } }
Add-ADGroupMember -Identity $_.Name -Member $users
} | unknown | |
d8700 | train | After having a look at the HTTP RFC, I read that the Location header must be an absolute URI (the relevant RFC section is linked below):
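For illustration, a compliant redirect response therefore carries a full URL rather than a relative path:
HTTP/1.1 302 Found
Location: http://www.example.org/new/path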
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.30 | unknown |