_id | partition | text | language | title
---|---|---|---|---
d11301 | train | Yes, use PreTranslateMessage. If you detect the sequence that should be handled, call:
if (..) // Check if you have a message that should
// be passed to the window directly
{
TranslateMessage(pMsg);
DispatchMessage(pMsg);
return TRUE;
}
You can always do this in PreTranslateMessage when you detect that the message should be handled by the default control and should not be handled by any other control in the chain of windows that execute PreTranslateMessage. This is also helpful if you have a combo box open and want Page Down/Up handled internally rather than by the view or an accelerator.
A: I've handled the delete key in the PreTranslateMessage as follows:
BOOL PreTranslateMessage(MSG* pMsg)
{
if(WM_KEYDOWN == pMsg->message && VK_DELETE == pMsg->wParam)
{
int iStartChar = -1, iEndChar = -1;
GetSel(iStartChar, iEndChar);
if(iStartChar != iEndChar)
Clear(); //clear the selected text
else
{
SetSel(iStartChar, iStartChar + 1);
Clear();
}
}
return CMFCToolBarComboBoxEdit::PreTranslateMessage(pMsg);
} | unknown | |
d11302 | train | var list = new List<string> { "red", "orange" };
// If the filter list is null, return all cars; otherwise keep only cars whose color is in the list.
var cars = from c in DB.Cars
           where list == null || list.Contains(c.Color)
           select c; | unknown | |
d11303 | train | I submitted a ticket with the development team and this is now fixed in version 4.0.25. | unknown | |
d11304 | train | According to the Simple Form documentation, you can skip use of the wrapper html tags by using input_field instead of input. So, if this is just a one-off case then you could define your own wrapper div tag and turn off the auto-generated wrapper:
.inputFields
= f.input_field :amount, label: false, required: true
= f.submit "✓", class: "btn btn-primary"
Or, if this is not a one-off case and is a common thing, you could create your own wrapper... See the custom wrapper documentation for help. | unknown | |
d11305 | train | Typically, you do the HTML parsing in a view. I'm not sure if you are loading a view... it isn't clear whether we are working with a logged-in user or not.
Let's assume we are not dealing with a logged-in user and are calling
$data['title']= 'Home';
$this->load->view('include/header',$data);
$this->load->view('pages/home.php', $data);
$this->load->view('include/footer',$data);
The easiest way to do it would be to just dump the data into the view and do the layout/parsing there:
controller:
$data['title']= 'Home';
$data['rss'] = $this->get_news();
$this->load->view('include/header',$data);
$this->load->view('pages/home.php', $data);
$this->load->view('include/footer',$data);
pages/home.php
<div class="container">
<?php
echo '<ul>';
foreach ($rss as $item)
{
echo '<li>';
echo $item['title'];
echo '</li>';
echo '<li>';
echo $item['description'];
echo '</li>';
echo '<li>';
echo $item['link'];
echo '</li>';
echo '<li>';
echo $item['pubDate'];
echo '</li>';
}
echo '</ul>';
?>
</div>
controller get_news():
public function get_news()
{
// Get 6 items from GAA latest news
$this->rssparser->set_feed_url('http://www.rte.ie/rss/gaa.xml'); // get feed
$this->rssparser->set_cache_life(30); // Set cache life time in minutes
$rss = $this->rssparser->getFeed(6);
return $rss;
} | unknown | |
d11306 | train | When you rename the directories in your project to fix the Gradle build, you should modify the following files, changing the old directory name to the new one:
* .project
* .classpath
* build.gradle
* settings.gradle | unknown | |
d11307 | train | This can be done with recent versions of xclip, which support -t text/html (target selection), plus pandoc to convert the HTML to Markdown.
See the details: Save HTML from clipboard as markdown text - Unix & Linux Stack Exchange
Thanks to @mountainx for asking again on the Unix stackexchange, which provided this solution, as noted in a comment above. | unknown | |
d11308 | train | Your IN and NOT IN don't make sense together.
If CPC_CLASS_SYMBOL values are in the first group, they are automatically NOT IN your second.
Your WHERE clause would only give you APPLN_ID (and some more) that have these symbols, and everything else is excluded. | unknown | |
d11309 | train | If most of the file is irrelevant to your application, I suggest preprocessing with your favorite scripting language or command line tool to find the relevant lines and use textscan() on that.
e.g., from a shell prompt:
grep ^I_NEED_THIS_STRING infile > outfile
in matlab:
fid = fopen('outfile');
C = textscan(fid, 'I_NEED_THIS_STRING = %f %f %f')
fclose(fid)
See the textscan documentation for more details.
A: An alternative is to use IMPORTDATA to read the entire file into a cell array of strings (with one line per cell), then use STRMATCH to find the cell that contains the string 'I_NEED_THIS_STRING', then use SSCANF to extract the 3 values from that cell:
>> data = importdata('mostly_useless_text.txt','\n'); %# Load the data
>> index = strmatch('I_NEED_THIS_STRING',data); %# Find the index of the cell
%# containing the string
>> values = sscanf(data{index},'I_NEED_THIS_STRING = %f %f %f') %# Read values
values =
1.0e+003 *
1.2345
6.7890
1.2345
If the file potentially has a lot of useless text before or after the line you are interested in, then you may use up a lot of memory in MATLAB by loading it all into a variable. You can avoid this by loading and parsing one line at a time using a loop and the function FGETS:
fid = fopen('mostly_useless_text.txt','r'); %# Open the file
newLine = fgets(fid); %# Get the first line
while newLine ~= -1 %# While EOF hasn't been reached
if strmatch('I_NEED_THIS_STRING',newLine) %# Test for a match
values = sscanf(newLine,'I_NEED_THIS_STRING = %f %f %f'); %# Read values
break %# Exit the loop
end
newLine = fgets(fid); %# Get the next line
end
fclose(fid); %# Close the file | unknown | |
d11310 | train | The code shown looks OK except that you're not supposed to call viewForOverlay directly.
The map view will call that delegate method when it needs to show the overlay.
The simplest reason it would not be calling the delegate method is that the map view's delegate property is not set.
If the delegate property is set, then verify that the coordinates are what you're expecting and also make sure they are not flipped (latitude in longitude and vice versa).
Another thing to check:
If _mapView is an IBOutlet, make sure it is actually connected to the map view in the xib. | unknown | |
d11311 | train | What framework are you using?
Isn't ToUniversalTime() the correct choice?
DateTime universalFormatDateTime = Convert.ToDateTime(dateTime).ToUniversalTime()
A: You should specify the DateTimeKind of your DateTime. Add this before performing the validation:
universalFormatDateTime = DateTime
.SpecifyKind(universalFormatDateTime,DateTimeKind.Local);
A: I guess this is what you're trying to achieve:
_timeZoneInfo = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");
var dateTime = "10/03/2013 2:12:00 AM";
DateTime universalFormatDateTime = Convert
.ToDateTime(dateTime, new CultureInfo("en-GB"))
.ToUniversalTime();
if (_timeZoneInfo.IsInvalidTime(universalFormatDateTime))
Console.WriteLine("Invalid DateTime");
else
Console.WriteLine("Valid DateTime");
You can look at the Convert.ToDateTime article for future reference. | unknown | |
d11312 | train | You can remove all parentheses from a dataframe column using
df_Movie["Movie Name"] = df_Movie["Movie Name"].str.replace(r'[()]+', '', regex=True)
The [()]+ regex pattern matches one or more ( or ) chars.
See the regex demo. | unknown | |
d11313 | train | Your problem is that, to begin with, Bid{x} and Ask{x} have not been instantiated, i.e. they're null, and you then store a reference to those values, so of course the reference is null. When you later update Bid0 (for example), that variable is updated, but nothing knows that the new value is intended to be stored within your set.
Suggest that you change your list to be an array of a fixed, known size (here, 20), which will be all nulls to begin with. Then change your getter/setter accessors for the individual Bid items to actually use the array internally. Then you also don't need all of those separate Bid{x}/Ask{x} variables; a rough sketch of this idea follows.
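For illustration only, here is a minimal sketch of that array-backed approach (the BidBook name, the decimal? element type and the count of 20 are assumptions, not taken from the original code):
public class BidBook
{
    // Fixed-size backing array: all 20 levels start out as null.
    private readonly decimal?[] _bids = new decimal?[20];

    // Accessors read/write the array directly, so whatever holds the array
    // always sees current values instead of stale null references.
    public decimal? GetBid(int level) => _bids[level];
    public void SetBid(int level, decimal? value) => _bids[level] = value;
} | unknown | |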
d11314 | train | The answer is no: Safari/WebKit considers sites that share a 2nd-level domain (i.e., example.com) to be 1st-party.
We tested this on some sites hosted on our local machines using dummy domains (www.example.localdev and api.example.localdev) and Safari treated them as 3rd-party. This meant we could not use our client-side site (www) to authenticate a user via our backend (api).
However, upon moving to staging instances on the internet with actual domains (www.example.com and api.example.com) they were treated as 1st-party and everyone went home happy.
WebKit's tracking protection describes supporting the subdomain strategy:
First and third-party. If news.example is shown in the URL bar and it loads a subresource from adtech.example, then news.example is first-party and adtech.example is third-party. Note that different parties have to be different websites. sub.news.example is considered first-party when loaded under news.example because they are considered to be the same site.
But it appears they also adhere strictly to their description of a website as "a registrable domain including all of its subdomains." | unknown | |
d11315 | train | Hi, I don't know if you still have the problem.
You can run php app/console doctrine:schema:update --dump-sql
in order to see the difference between your model and the database,
and then apply the statements it shows you.
I think that will fix the problem | unknown | |
d11316 | train | Michal Zygar is partially correct.
Make sure your -(CGFloat)tableView:(UITableView*)tableView heightForRowAtIndexPath:(NSIndexPath*)indexPath is correctly set to return the height of the view. It doesn't automatically do that for you.
The other tip I would suggest, as I do it myself, is to NOT use separators. Set your separator to none, and then add in two 1px-high views at the top and bottom of the cell in the XIB file.
Make sure to set the autosizing for the bottom two to stick only to the bottom edge, just in case you want to change the cell's height! | unknown | |
d11317 | train | According to documentation you need to use "is null" not "null" see https://sldn.softlayer.com/article/object-filters
I noticed you are trying to get parent billing items with their children, I updated the object-mask in order to reduce retrieved data, take account that you could get errors if you are working with a large set of data. Review how-solve-error-fetching-http-headers. You can use any of the following masks:
objectMask=mask[id,parentId,category,location,associatedChildren.category]
objectMask=id;parentId;category;location;associatedChildren.category
But if you are trying to get only parent billing items, it's not necessary to use a mask unless you want data like category and location; in that case you can use the following mask:
objectMask=mask[category,location]
This last one will show you the information of the parent item, including the category and location.
You should be able to get parent billing items with their children using the following REST call.
https://<user_name>:<api_key>@api.softlayer.com/rest/v3/SoftLayer_Account/getAllBillingItems?objectFilter={"allBillingItems":{"nextBillDate":{"operation":"betweenDate","options":[{"name":"startDate","value":["03/07/2017"]},{"name":"endDate","value":["03/20/2017"]}]},"parentId":{"operation": "is null"}}}&objectMask=mask[id,parentId,category,location,associatedChildren.category]
Take into account that some REST clients don't support blank spaces between letters. And finally, if you have a lot of billing items, I suggest using the result-limit feature in order to avoid timeout errors.
References:
http://sldn.softlayer.com/article/rest
http://sldn.softlayer.com/article/object-Masks
https://sldn.softlayer.com/article/object-filters
http://sldn.softlayer.com/reference/services/SoftLayer_Billing_Item
I recommend you read the documentation before going any further
Regards | unknown | |
d11318 | train | I've been struggling with this as well. Here's what I came up with after a lot of trial and error:
function listAccounts() {
try {
accounts = AnalyticsAdmin.AccountSummaries.list();
//Logger.log(accounts);
if (!accounts.accountSummaries || !accounts.accountSummaries.length) {
Logger.log('No accounts found.');
return;
}
Logger.log(accounts.accountSummaries.length + ' accounts found');
for (let i = 0; i < accounts.accountSummaries.length; i++) {
const account = accounts.accountSummaries[i];
Logger.log('** Account: name "%s", displayName "%s".', account.name, account.displayName);
if (account.propertySummaries) {
properties = AnalyticsAdmin.Properties.list({filter: 'parent:' + account.account});
if (properties.properties !== null) {
for (let j = 0 ; j < properties.properties.length ; j++) {
var propertyID = properties.properties[j].name.replace('properties/','');
Logger.log("GA4 Property: " + properties.properties[j].displayName + '(' + propertyID + ')')
}
}
} else {
var accountID = account.account.replace('accounts/','');
var webProperties = Analytics.Management.Webproperties.list(accountID);
for (var j = 0; j < webProperties.items.length; j++) {
Logger.log("UA Property: " + webProperties.items[j].name + '(' + webProperties.items[j].id + ')');
var profiles = Analytics.Management.Profiles.list(accountID, webProperties.items[j].id);
for (var k = 0; k < profiles.items.length; k++) {
Logger.log('Profile:' + profiles.items[k].name);
}
}
}
}
} catch (e) {
// TODO (Developer) - Handle exception
Logger.log('Failed with error: %s', e.error);
}
}
Not saying it's the right way...but it does appear to be able to pull all of my UA and GA4 properties.
If you only want GA4 properties then you can leave out the else on "if (account.propertySummaries)". | unknown | |
d11319 | train | If it's always adding the same amount of data, it may make sense to reopen it. You might want to find out the length before you open it, and then round down to the whole number of "sample sets" available, just in case you catch it while it's still writing the data. That may mean you read less than you could read (if the write finishes between you checking the length and starting the read) but you'll catch up next time.
You'll need to make sure you use appropriate sharing options so that the writer can still write while you're reading though. (The writer will probably have to have been written with this in mind too.)
A: Can you use MemoryMappedFiles?
If you can, by mapping the file in memory and sharing it between processes you will be able to read the data by simply incrementing the offset of your pointer each time.
If you combine it with an event, you can signal your reader when it can go in and read the information. There will be no need to block anything, as the reader will always read "old" data which has already been written.
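As a rough sketch of the reader side (not from the answer itself; the map name "SharedLog" and reading a single Int32 are assumptions):
using System.IO.MemoryMappedFiles;

using (var mmf = MemoryMappedFile.OpenExisting("SharedLog"))
using (var accessor = mmf.CreateViewAccessor())
{
    long offset = 0;                        // advance this as data is consumed
    int value = accessor.ReadInt32(offset); // read data the writer has already written
    offset += sizeof(int);
}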
A: I would recommend using pipes; they act just like files, except they stream data directly between applications, even if the apps run on different PCs (though this is really only an option if you are able to change both applications). Check it out under the "System.IO.Pipes" namespace.
P.S. You would use a "named" pipe for this (pipes are supported in C as well, so basically any half-decent programming language should be able to implement them).
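A minimal sketch of the reading end (illustrative only; the pipe name "logpipe" and line-based framing are assumptions):
using System.IO.Pipes;

using (var pipe = new NamedPipeClientStream(".", "logpipe", PipeDirection.In))
using (var reader = new StreamReader(pipe))
{
    pipe.Connect();                  // waits for the writing application
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line);     // handle each chunk as it arrives
    }
}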
A: I think that (a) is the best because:
* Current Position will be incremented as you read, and you don't need to worry about storing it somewhere;
* You don't need to open it and seek to the required position each time you poll it (it shouldn't be much slower to reopen, but keeping it open gives the OS some hints for optimization, I believe);
* Other solutions I can think of require P/Invokes to system interprocess synchronisation primitives, and they won't be faster than the file operations already in the framework.
You just need to set proper FileShare flags:
Just for example:
Server:
using(var writer = new BinaryWriter(new FileStream(@"D:\testlog.log", FileMode.Append, FileAccess.Write, FileShare.Read)))
{
int n;
while(Int32.TryParse(Console.ReadLine(), out n))
{
writer.Write(n);
writer.Flush(); // write cached bytes to file
}
}
Client:
using (var reader = new BinaryReader(new FileStream(@"D:\testlog.log", FileMode.Open, FileAccess.Read, FileShare.ReadWrite)))
{
string s;
while (Console.ReadLine() != "exit")
{
// allocate buffer for new ints
Int32[] buffer = new Int32[(reader.BaseStream.Length - reader.BaseStream.Position) / sizeof(Int32)];
Console.WriteLine("Stream length: {0}", reader.BaseStream.Length);
Console.Write("Ints read: ");
for (int i = 0; i < buffer.Length; i++)
{
buffer[i] = reader.ReadInt32();
Console.Write((i == 0 ? "" : ", ") + buffer[i].ToString());
}
Console.WriteLine();
}
}
A: As another alternative, you could also stream the data into a database rather than a file; then you wouldn't have to worry about file locking.
But if you're stuck with the file method, you may want to close the file each time you read data from it; it depends a lot on how complicated the process writing to the file is going to be, and whether it can detect a file locking operation and respond appropriately without crashing horribly. | unknown | |
d11320 | train | I believe you're asking for ng-class.
you need to set a variable to represent 'is_active', and use it in your html like so:
<i class="btn fa" ng-class="{'fa-toggle-on isActive' : is_active,
'fa-toggle-off isInactive' : !is_active}"></i>
A: Thanks for your time and answers.
I found the following and get my work done.
Using ng-if inside ng-repeat?
So my code now looks like this
<td ng-if="client.clientStatus == 0"><i class="btn fa fa-toggle-off isInactive"></i></td>
<td ng-if="client.clientStatus == 1"><i class="btn fa fa-toggle-on isActive"></i></td> | unknown | |
d11321 | train | I use lazily constructed, auto-updating collections:
public class BasketModelView
{
private readonly Lazy<ObservableCollection<AppleModelView>> _appleViews;
public BasketModelView(BasketModel basket)
{
Func<AppleModel, AppleModelView> viewModelCreator = model => new AppleModelView(model);
Func<ObservableCollection<AppleModelView>> collectionCreator =
() => new ObservableViewModelCollection<AppleModelView, AppleModel>(basket.Apples, viewModelCreator);
_appleViews = new Lazy<ObservableCollection<AppleModelView>>(collectionCreator);
}
public ObservableCollection<AppleModelView> Apples
{
get
{
return _appleViews.Value;
}
}
}
Using the following ObservableViewModelCollection<TViewModel, TModel>:
namespace Client.UI
{
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Collections.Specialized;
using System.Diagnostics.Contracts;
using System.Linq;
public class ObservableViewModelCollection<TViewModel, TModel> : ObservableCollection<TViewModel>
{
private readonly ObservableCollection<TModel> _source;
private readonly Func<TModel, TViewModel> _viewModelFactory;
public ObservableViewModelCollection(ObservableCollection<TModel> source, Func<TModel, TViewModel> viewModelFactory)
: base(source.Select(model => viewModelFactory(model)))
{
Contract.Requires(source != null);
Contract.Requires(viewModelFactory != null);
this._source = source;
this._viewModelFactory = viewModelFactory;
this._source.CollectionChanged += OnSourceCollectionChanged;
}
protected virtual TViewModel CreateViewModel(TModel model)
{
return _viewModelFactory(model);
}
private void OnSourceCollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
{
switch (e.Action)
{
case NotifyCollectionChangedAction.Add:
for (int i = 0; i < e.NewItems.Count; i++)
{
this.Insert(e.NewStartingIndex + i, CreateViewModel((TModel)e.NewItems[i]));
}
break;
case NotifyCollectionChangedAction.Move:
if (e.OldItems.Count == 1)
{
this.Move(e.OldStartingIndex, e.NewStartingIndex);
}
else
{
List<TViewModel> items = this.Skip(e.OldStartingIndex).Take(e.OldItems.Count).ToList();
for (int i = 0; i < e.OldItems.Count; i++)
this.RemoveAt(e.OldStartingIndex);
for (int i = 0; i < items.Count; i++)
this.Insert(e.NewStartingIndex + i, items[i]);
}
break;
case NotifyCollectionChangedAction.Remove:
for (int i = 0; i < e.OldItems.Count; i++)
this.RemoveAt(e.OldStartingIndex);
break;
case NotifyCollectionChangedAction.Replace:
// remove
for (int i = 0; i < e.OldItems.Count; i++)
this.RemoveAt(e.OldStartingIndex);
// add
goto case NotifyCollectionChangedAction.Add;
case NotifyCollectionChangedAction.Reset:
Clear();
for (int i = 0; i < e.NewItems.Count; i++)
this.Add(CreateViewModel((TModel)e.NewItems[i]));
break;
default:
break;
}
}
}
}
A: Well first of all, I don't think there is a single "right way" to do this. It depends entirely on your application. There are more correct ways and less correct ways.
That much being said, I am wondering why you would need to keep these collections "in sync." What scenario are you considering that would make them go out of sync? If you look at the sample code from Josh Smith's MSDN article on M-V-VM, you will see that the majority of the time, the Models are kept in sync with the ViewModels simply because every time a Model is created, a ViewModel is also created. Like this:
void CreateNewCustomer()
{
Customer newCustomer = Customer.CreateNewCustomer();
CustomerViewModel workspace = new CustomerViewModel(newCustomer, _customerRepository);
this.Workspaces.Add(workspace);
this.SetActiveWorkspace(workspace);
}
I am wondering, what prevents you from creating an AppleModelView every time you create an Apple? That seems to me to be the easiest way of keeping these collections "in sync," unless I have misunderstood your question.
A: You can find an example (and explanations) here too : http://blog.lexique-du-net.com/index.php?post/2010/03/02/M-V-VM-How-to-keep-collections-of-ViewModel-and-Model-in-sync
Hope this helps
A: I may not exactly understand your requirements; however, the way I have handled a similar situation is to use the CollectionChanged event on the ObservableCollection and simply create/destroy the view models as required.
void OnApplesCollection_CollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
{
// Only add/remove items if already populated.
if (!IsPopulated)
return;
Apple apple;
switch (e.Action)
{
case NotifyCollectionChangedAction.Add:
apple = e.NewItems[0] as Apple;
if (apple != null)
AddViewModel(apple);
break;
case NotifyCollectionChangedAction.Remove:
apple = e.OldItems[0] as Apple;
if (apple != null)
RemoveViewModel(apple);
break;
}
}
There can be some performance issues when you add/remove a lot of items in a ListView.
We have solved this by extending the ObservableCollection to have AddRange, RemoveRange and BinaryInsert methods and adding events that notify others that the collection is being changed. Together with an extended CollectionViewSource that temporarily disconnects the source while the collection is changed, it works nicely.
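A rough sketch of such an AddRange extension (illustrative only, not the poster's actual implementation; requires System.Collections.ObjectModel and System.Collections.Specialized):
public class RangeObservableCollection<T> : ObservableCollection<T>
{
    // Adds many items but raises only a single Reset notification at the end,
    // instead of one CollectionChanged event (and ListView update) per item.
    public void AddRange(IEnumerable<T> items)
    {
        foreach (var item in items)
            Items.Add(item);   // Items is the protected underlying list, so no events fire here

        OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
    }
}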
HTH,
Dennis
A: The «Using MVVM to provide undo/redo. Part 2: Viewmodelling lists» article provides the MirrorCollection<V, D> class to achieve the view-model and model collections synchronization.
Additional references
* Original link (currently, it is not available): Notify Changed » Blog Archive » Using MVVM to provide undo/redo. Part 2: Viewmodelling lists.
A: OK I have a nerd crush on this answer so I had to share this abstract factory I added to it to support my ctor injection.
using System;
using System.Collections.ObjectModel;
namespace MVVM
{
public class ObservableVMCollectionFactory<TModel, TViewModel>
: IVMCollectionFactory<TModel, TViewModel>
where TModel : class
where TViewModel : class
{
private readonly IVMFactory<TModel, TViewModel> _factory;
public ObservableVMCollectionFactory( IVMFactory<TModel, TViewModel> factory )
{
this._factory = factory.CheckForNull();
}
public ObservableCollection<TViewModel> CreateVMCollectionFrom( ObservableCollection<TModel> models )
{
Func<TModel, TViewModel> viewModelCreator = model => this._factory.CreateVMFrom(model);
return new ObservableVMCollection<TViewModel, TModel>(models, viewModelCreator);
}
}
}
Which builds off of this:
using System.Collections.ObjectModel;
namespace MVVM
{
public interface IVMCollectionFactory<TModel, TViewModel>
where TModel : class
where TViewModel : class
{
ObservableCollection<TViewModel> CreateVMCollectionFrom( ObservableCollection<TModel> models );
}
}
And this:
namespace MVVM
{
public interface IVMFactory<TModel, TViewModel>
{
TViewModel CreateVMFrom( TModel model );
}
}
And here is the null checker for completeness:
namespace System
{
public static class Exceptions
{
/// <summary>
/// Checks for null.
/// </summary>
/// <param name="thing">The thing.</param>
/// <param name="message">The message.</param>
public static T CheckForNull<T>( this T thing, string message )
{
if ( thing == null ) throw new NullReferenceException(message);
return thing;
}
/// <summary>
/// Checks for null.
/// </summary>
/// <param name="thing">The thing.</param>
public static T CheckForNull<T>( this T thing )
{
if ( thing == null ) throw new NullReferenceException();
return thing;
}
}
}
A: While Sam Harwell's solution is pretty good already, it is subject to two problems:
* The event handler that is registered here this._source.CollectionChanged += OnSourceCollectionChanged is never unregistered, i.e. a this._source.CollectionChanged -= OnSourceCollectionChanged is missing.
* If event handlers are ever attached to events of view models generated by the viewModelFactory, there is no way of knowing when these event handlers may be detached again. (Or generally speaking: you cannot prepare the generated view models for "destruction".)
Therefore I propose a solution that fixes both of these shortcomings of Sam Harwell's approach:
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Collections.Specialized;
using System.Diagnostics.Contracts;
using System.Linq;
namespace Helpers
{
public class ObservableViewModelCollection<TViewModel, TModel> : ObservableCollection<TViewModel>
{
private readonly Func<TModel, TViewModel> _viewModelFactory;
private readonly Action<TViewModel> _viewModelRemoveHandler;
private ObservableCollection<TModel> _source;
public ObservableViewModelCollection(Func<TModel, TViewModel> viewModelFactory, Action<TViewModel> viewModelRemoveHandler = null)
{
Contract.Requires(viewModelFactory != null);
_viewModelFactory = viewModelFactory;
_viewModelRemoveHandler = viewModelRemoveHandler;
}
public ObservableCollection<TModel> Source
{
get { return _source; }
set
{
if (_source == value)
return;
this.ClearWithHandling();
if (_source != null)
_source.CollectionChanged -= OnSourceCollectionChanged;
_source = value;
if (_source != null)
{
foreach (var model in _source)
{
this.Add(CreateViewModel(model));
}
_source.CollectionChanged += OnSourceCollectionChanged;
}
}
}
private void OnSourceCollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
{
switch (e.Action)
{
case NotifyCollectionChangedAction.Add:
for (int i = 0; i < e.NewItems.Count; i++)
{
this.Insert(e.NewStartingIndex + i, CreateViewModel((TModel)e.NewItems[i]));
}
break;
case NotifyCollectionChangedAction.Move:
if (e.OldItems.Count == 1)
{
this.Move(e.OldStartingIndex, e.NewStartingIndex);
}
else
{
List<TViewModel> items = this.Skip(e.OldStartingIndex).Take(e.OldItems.Count).ToList();
for (int i = 0; i < e.OldItems.Count; i++)
this.RemoveAt(e.OldStartingIndex);
for (int i = 0; i < items.Count; i++)
this.Insert(e.NewStartingIndex + i, items[i]);
}
break;
case NotifyCollectionChangedAction.Remove:
for (int i = 0; i < e.OldItems.Count; i++)
this.RemoveAtWithHandling(e.OldStartingIndex);
break;
case NotifyCollectionChangedAction.Replace:
// remove
for (int i = 0; i < e.OldItems.Count; i++)
this.RemoveAtWithHandling(e.OldStartingIndex);
// add
goto case NotifyCollectionChangedAction.Add;
case NotifyCollectionChangedAction.Reset:
this.ClearWithHandling();
if (e.NewItems == null)
break;
for (int i = 0; i < e.NewItems.Count; i++)
this.Add(CreateViewModel((TModel)e.NewItems[i]));
break;
default:
break;
}
}
private void RemoveAtWithHandling(int index)
{
_viewModelRemoveHandler?.Invoke(this[index]);
this.RemoveAt(index);
}
private void ClearWithHandling()
{
if (_viewModelRemoveHandler != null)
{
foreach (var item in this)
{
_viewModelRemoveHandler(item);
}
}
this.Clear();
}
private TViewModel CreateViewModel(TModel model)
{
return _viewModelFactory(model);
}
}
}
To deal with the first of the two problems, you can simply set Source to null in order to get rid of the CollectionChanged event handler.
To deal with the second of the two problems, you can simply add a viewModelRemoveHandler that allows you to "prepare your object for destruction", e.g. by removing any event handlers attached to it.
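For instance, usage might look like this (AppleModel/AppleModelView reused from the earlier examples; the cleanup body is only a placeholder):
var appleViews = new ObservableViewModelCollection<AppleModelView, AppleModel>(
    model => new AppleModelView(model),                       // viewModelFactory
    viewModel => { /* detach event handlers, dispose, etc. */ });

appleViews.Source = basket.Apples;   // attaches the CollectionChanged handler and builds view models
appleViews.Source = null;            // runs the remove handler for each view model and detaches again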
A: I've written some helper classes for wrapping observable collections of business objects in their View Model counterparts here
A: I really like 280Z28's solution. Just one remark. Is it necessary to do the loops for each NotifyCollectionChangedAction? I know that the docs for the actions state "one or more items" but since ObservableCollection itself does not support adding or removing ranges, this can never happen I would think.
A: Resetting a collection to a default value, or to match a target value, is something I've hit quite frequently.
I wrote a small helper class of miscellaneous methods that includes:
public static class Misc
{
public static void SyncCollection<TCol,TEnum>(ICollection<TCol> collection,IEnumerable<TEnum> source, Func<TCol,TEnum,bool> comparer, Func<TEnum, TCol> converter )
{
var missing = collection.Where(c => !source.Any(s => comparer(c, s))).ToArray();
var added = source.Where(s => !collection.Any(c => comparer(c, s))).ToArray();
foreach (var item in missing)
{
collection.Remove(item);
}
foreach (var item in added)
{
collection.Add(converter(item));
}
}
public static void SyncCollection<T>(ICollection<T> collection, IEnumerable<T> source, EqualityComparer<T> comparer)
{
var missing = collection.Where(c=>!source.Any(s=>comparer.Equals(c,s))).ToArray();
var added = source.Where(s => !collection.Any(c => comparer.Equals(c, s))).ToArray();
foreach (var item in missing)
{
collection.Remove(item);
}
foreach (var item in added)
{
collection.Add(item);
}
}
public static void SyncCollection<T>(ICollection<T> collection, IEnumerable<T> source)
{
SyncCollection(collection,source, EqualityComparer<T>.Default);
}
}
which covers most of my needs.
The first would probably be most applicable, as you're also converting types.
Note: this only syncs the elements in the collection, not the values inside them.
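For illustration, the first overload might be used like this (PersonViewModel/Person and the Id match are hypothetical, just to show the comparer and converter arguments):
// viewModels : ICollection<PersonViewModel>, models : IEnumerable<Person>
Misc.SyncCollection(
    viewModels,
    models,
    (vm, m) => vm.Id == m.Id,            // comparer: do these represent the same item?
    m => new PersonViewModel(m));        // converter: build a view model for a newly added model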
A: This is a slight variation on Sam Harwell's answer, implementing IReadOnlyCollection<> and INotifyCollectionChanged instead of inheriting from ObservableCollection<> directly. This prevents consumers from modifying the collection, which wouldn't generally be desired in this scenario.
This implementation also uses CollectionChangedEventManager to attach the event handler to the source collection to avoid a memory leak if the source collection is not disposed at the same time as the mirrored collection.
/// <summary>
/// A collection that mirrors an <see cref="ObservableCollection{T}"/> source collection
/// with a transform function to create it's own elements.
/// </summary>
/// <typeparam name="TSource">The type of elements in the source collection.</typeparam>
/// <typeparam name="TDest">The type of elements in this collection.</typeparam>
public class MappedObservableCollection<TSource, TDest>
: IReadOnlyCollection<TDest>, INotifyCollectionChanged
{
/// <inheritdoc/>
public int Count => _mappedCollection.Count;
/// <inheritdoc/>
public event NotifyCollectionChangedEventHandler CollectionChanged {
add { _mappedCollection.CollectionChanged += value; }
remove { _mappedCollection.CollectionChanged -= value; }
}
private readonly Func<TSource, TDest> _elementMapper;
private readonly ObservableCollection<TDest> _mappedCollection;
/// <summary>
/// Initializes a new instance of the <see cref="MappedObservableCollection{TSource, TDest}"/> class.
/// </summary>
/// <param name="sourceCollection">The source collection whose elements should be mapped into this collection.</param>
/// <param name="elementMapper">Function to map elements from the source collection to this collection.</param>
public MappedObservableCollection(ObservableCollection<TSource> sourceCollection, Func<TSource, TDest> elementMapper)
{
if (sourceCollection == null) throw new ArgumentNullException(nameof(sourceCollection));
_mappedCollection = new ObservableCollection<TDest>(sourceCollection.Select(elementMapper));
_elementMapper = elementMapper ?? throw new ArgumentNullException(nameof(elementMapper));
// Update the mapped collection whenever the source collection changes
// NOTE: Use the weak event pattern here to avoid a memory leak
// See: https://learn.microsoft.com/en-us/dotnet/framework/wpf/advanced/weak-event-patterns
CollectionChangedEventManager.AddHandler(sourceCollection, OnSourceCollectionChanged);
}
/// <inheritdoc/>
IEnumerator<TDest> IEnumerable<TDest>.GetEnumerator()
=> _mappedCollection.GetEnumerator();
/// <inheritdoc/>
IEnumerator IEnumerable.GetEnumerator()
=> _mappedCollection.GetEnumerator();
/// <summary>
/// Mirror a change event in the source collection into the internal mapped collection.
/// </summary>
private void OnSourceCollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
{
switch (e.Action) {
case NotifyCollectionChangedAction.Add:
InsertItems(e.NewItems, e.NewStartingIndex);
break;
case NotifyCollectionChangedAction.Remove:
RemoveItems(e.OldItems, e.OldStartingIndex);
break;
case NotifyCollectionChangedAction.Replace:
RemoveItems(e.OldItems, e.OldStartingIndex);
InsertItems(e.NewItems, e.NewStartingIndex);
break;
case NotifyCollectionChangedAction.Reset:
_mappedCollection.Clear();
if (e.NewItems != null) InsertItems(e.NewItems, 0); // NewItems is null for a plain Clear()
break;
case NotifyCollectionChangedAction.Move:
if (e.OldItems.Count == 1) {
_mappedCollection.Move(e.OldStartingIndex, e.NewStartingIndex);
} else {
// Capture the mapped items before removing them, then re-insert them at the new position.
var movedItems = _mappedCollection.Skip(e.OldStartingIndex).Take(e.OldItems.Count).ToList();
RemoveItems(e.OldItems, e.OldStartingIndex);
for (int i = 0; i < movedItems.Count; i++) {
    _mappedCollection.Insert(e.NewStartingIndex + i, movedItems[i]);
}
}
break;
}
}
private void InsertItems(IList newItems, int newStartingIndex)
{
for (int i = 0; i < newItems.Count; i++)
_mappedCollection.Insert(newStartingIndex + i, _elementMapper((TSource)newItems[i]));
}
private void RemoveItems(IList oldItems, int oldStartingIndex)
{
for (int i = 0; i < oldItems.Count; i++)
_mappedCollection.RemoveAt(oldStartingIndex);
}
} | unknown | |
d11322 | train | Here the separator ---- in the GROUP_CONCAT function is creating an issue.
If you use a different separator, like ====, the issue will be resolved.
Like,
$stmp = "(SELECT GROUP_CONCAT(comment SEPARATOR '====' ) FROM mgmx_sales_flat_invoice_comment a WHERE a.parent_id = `main_table`.`entity_id` group by parent_id)";
Hope this helps! | unknown | |
d11323 | train | The error message with 2.11 is more explanatory:
scala> l map { (b, n) => b + n }
<console>:9: error: missing parameter type
Note: The expected type requires a one-argument function accepting a 2-Tuple.
Consider a pattern matching anonymous function, `{ case (b, n) => ... }`
l map { (b, n) => b + n }
^
<console>:9: error: missing parameter type
l map { (b, n) => b + n }
^
For an apply, you get "auto-tupling":
scala> def f(p: (Int, Int)) = p._1 + p._2
f: (p: (Int, Int))Int
scala> f(1,2)
res0: Int = 3
where you supplied two args instead of one.
But you don't get auto-untupling.
People have always wanted it to work that way.
A: This situation can be understood by looking at the types of the inner function.
First, the type expected for the function parameter of the map function is as follows.
Tuple2[Int,Int] => B //Function1[Tuple2[Int, Int], B]
The first parameter function expands to this.
(t:(Int,Int)) => t._1 + t._2 // type : Tuple2[Int,Int] => Int
This is ok. Then the second function.
(t:(Int, Int)) => t match {
case (a:Int, b:Int) => a + b
}
This is also ok. In the failure scenario,
(a:Int, b:Int) => a + b
Let's check the type of the function
(Int, Int) => Int // Function2[Int, Int, Int]
So the parameter function type is wrong.
As a solution, you can convert functions of multiple arity to tupled form and back with the helper functions in the Function object. You can do the following.
val l = Seq(("un", ""), ("deux", "hehe"), ("trois", "lol"))
l map(Function.tupled((b, n) => b + n ))
Please refer to the Function API for further information.
A: The type of the function argument passed to the map function applied to a sequence is inferred from the type of elements in the sequence. In particular,
scenario 1: l map { t => t._1 + t._2 } is the same as l map { (t: (String, String)) => t._1 + t._2 } but shorter, which is possible because of type inference. The Scala compiler automatically inferred the type of the argument t to be (String, String)
scenario 2: you can also write it in longer form
l map { t => t match {
case(b, n) => b + n
}
}
scenario 3: a function of wrong type is passed to map, which is similar to
def f1 (a: String, b: String) = a + b
def f2 (t: (String, String)) = t match { case (a, b) => a + b }
l map f1 // won't work
l map f2 | unknown | |
d11324 | train | I found a description of my problem in the following article:
Impersonation does not work with UserProfileManager
As a workaround, you can clear the HttpContext each time you get or set user profile properties. For example, the following code works fine for me.
SPSecurity.RunWithElevatedPrivileges(delegate()
{
HttpContext tempCtx = HttpContext.Current;
HttpContext.Current = null;
UserProfile userProfile = GetUserProfile(user);
userProfile["SomeProperty"].Value = points;
userProfile.Commit();
HttpContext.Current = tempCtx;
}); | unknown | |
d11325 | train | Found the bug. I was debugging the package itself and I needed to add the image to the assets found in the package. | unknown | |
d11326 | train | See if this question may be of help to you:
Is there TryResolve in Unity? | unknown | |
d11327 | train | First of all, that's not how you add variables using template literals; you can read more about it here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals
Second, why do you query it again when you've just made the element? You can use card as a reference, and if you need something within it, it's much easier to access it using the variable you already have rather than looking for it in your document.
Maybe something like this, but it's hard to tell without more code, etc.
button.addEventListener('click', resp => {
count = count +1;
var card = document.createElement('card');
card.innerHTML = `
<img src="..." class="card-img-top" alt="...">
<div class="card-body">
<h5 class="card_title${count}"></h5>
<h6 class="temp${count}"></h6>
<p class="card-text${count}"></p>**
<a href="#" class="btn-primary"></a>
</div>
`;
card.className = 'card';
var content = document.getElementById('id1');
content.appendChild(card);
var citysName = card.querySelector('.card_title'+count);
var description = card.querySelector('.card-text'+count);
var temp = card.querySelector('.temp'+count);
fetch('https://api.openweathermap.org/data/2.5/weather?q='+inputVal.value+'&appid=a5599c020b0d897cbc8b52d547289acc')
.then(post => post.json())
.then(data => {
var cityName = data['name'];
var temper = data['main']['temp'];
var descrip = data['weather'][0]['description'];
let ctemp = Math.round(temper-273);
citysName.innerHTML = cityName;
temp.innerHTML = ctemp + "°C";
description.innerHTML = descrip;
})
}) | unknown | |
d11328 | train | What have you tried? Because as it stands it sounds like all you need to do is
Module Module1
Sub Main()
fnc_CriaContas_Email_Lote()
End Sub
Sub fnc_CriaContas_Email_Lote()
' Do something.
End Sub
End Module
If "fnc_CriaContas_Email_Lote" is a class then you might have to do something like:
Module Module1
Sub Main()
dim email as new cria_contas_lote()
email.fnc_CriaContas_Email_Lote()
End Sub
End Module
Without seeing the cria_contas_lote file it's hard to know.
Edit: Below is how you can call it all from just the module
Imports System.Data.OleDb
Module Module1
Sub Main()
fnc_CriaContas_Email_Lote()
End Sub
Sub fnc_CriaContas_Email_Lote()
Dim oPainelWS As PainelControle.svcSmarterMail
Dim sRetorno As String = ""
Try
'oPainelWS = New PainelControle.svcSmarterMail("xxx.xxx.xxx.xxx")
Catch ex As Exception
Console.WriteLine("Erro ao efetuar a conexão no servidor remoto: " & ex.Message)
Exit Sub
End Try
Dim sNomeArquivo As String = "C:\dir\emails.xlsx"
Dim sSQL As String = ""
Dim stringExcel As String = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & sNomeArquivo & ";Extended Properties=Excel 12.0"
Dim oExcel As New OleDbConnection(stringExcel)
Try
oExcel.Open()
Catch ex As Exception
Console.Write("O arquivo não foi localizado ou ocorreu um erro de abertura no servidor. Arquivo: " & sNomeArquivo)
Console.Write(vbCrLf & "================================================")
Console.Write(vbCrLf & ex.Message)
Console.Write(vbCrLf & "================================================")
Exit Sub
End Try
Dim oDataSet As New DataSet
Try
Dim oExcelAdapter As New OleDbDataAdapter("select * from [contas_pop$]", oExcel)
oExcelAdapter.Fill(oDataSet, "conteudo")
Catch ex As Exception
Console.Write("A tabela CONTAS_POP não foi localizada. Renomeie sua WorkSheet para CONTAS_POP")
oExcel.Close()
Exit Sub
End Try
oExcel.Close()
Dim oDataview As DataView = oDataSet.Tables("conteudo").DefaultView
Dim lTotal As Long = 0
Dim lErro As Long = 0
Dim oLinha As DataRow
Dim iTamanhoCaixa As Integer = 1024
Dim sComCopia As String
For Each oLinha In oDataSet.Tables("conteudo").Rows
If Not (Trim(oLinha("conta").ToString) = "") Then
Console.Write("Criando [" & Trim(oLinha("conta").ToString) & "]...")
sRetorno = ""
sComCopia = Trim(oLinha("enviar_copia").ToString)
iTamanhoCaixa = oLinha("tamanho_mb")
sRetorno = CriaContaPOP(Trim(oLinha("conta").ToString), Trim(oLinha("apelidos").ToString), Trim(oLinha("password").ToString), iTamanhoCaixa, oLinha("nome").ToString, sComCopia, "admin", "password")
'sRetorno = oPainelWS.CriaContaPOP(oLinha("conta"), Trim(oLinha("apelidos").ToString), oLinha("senha"), iTamanhoCaixa, "", sComCopia, "", "")
Console.WriteLine("Retorno: " & sRetorno)
'If Not (sRetorno = "OK") Then
'Exit Sub
'End If
Threading.Thread.Sleep(100)
End If
Next
End Sub
Public Function CriaContaPOP(ByVal sConta As String, ByVal sApelidos As String, ByVal sSenha As String, ByVal iTamanhoCaixaKB As String, ByVal sNome As String, ByVal sForwardTo As String, ByVal sAdminUsuario As String, ByVal sAdminSenha As String) As String
If Not (iTamanhoCaixaKB > 1) Then
Return "ERRO: Tamanho da caixa postal não pode ser inferior a 1 KB"
End If
Dim aContaNome As String() = Split(sConta, "@")
Dim sContaNome As String = ""
Dim sDominio As String = ""
sContaNome = aContaNome(0)
sDominio = aContaNome(1)
Dim oUsuarios As New svcUserAdmin
Dim oUsuarioInfo As New SettingsRequestResult
Dim oResultado As New GenericResult
oResultado = oUsuarios.AddUser2(sAdminUsuario, sAdminSenha, sContaNome, sSenha, sDominio, sNome, "", False, iTamanhoCaixaKB)
If (oResultado.Result = False) Then
Return "ERRO: Não foi possivel incluir a conta de e-mail: " & oResultado.Message
End If
If Not (sForwardTo.ToString = "") Then
Dim arrInfo(0) As String
arrInfo(0) = "forwardaddress=" & sForwardTo.ToString
oResultado = oUsuarios.SetRequestedUserSettings(sAdminUsuario, sAdminSenha, sConta, arrInfo)
If (oResultado.Result = False) Then
Return "ERRO: Não foi possivel incluir a conta de e-mail: " & oResultado.Message
End If
End If
Return "OK"
End Function
End Module
Your issue is you're missing the following types:
* PainelControle.svcSmarterMail
* svcUserAdmin
* SettingsRequestResult
* GenericResult
These are not built-in .NET types and must be defined in another file. Once you've found the missing classes, just add them into the project and you should be good to go.
A: I think you want to make Sub Main run the ENTIRE function, and NOT EXIT while Sub Main is executed.
I have written an answer to your question here: VB.net program with no UI
Sub Main()
'Write whatever you want, and add this code at the END:
Application.Run
End Sub | unknown | |
d11329 | train | Change your code to this and it will work:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.2/jquery.min.js"></script>
<script type="text/javascript">
$(function(){
var loadE = "<img src='data:image/gif;base64,R0lGODlhEAAQAPIAAP////8AAP7Cwv5CQv8AAP5iYv6Cgv6SkiH/C05FVFNDQVBFMi4wAwEAAAAh/h1CdWlsdCB3aXRoIEdJRiBNb3ZpZSBHZWFyIDQuMAAh/hVNYWRlIGJ5IEFqYXhMb2FkLmluZm8AIfkECQoAAAAsAAAAABAAEAAAAzMIutz+MMpJaxNjCDoIGZwHTphmCUWxMcK6FJnBti5gxMJx0C1bGDndpgc5GAwHSmvnSAAAIfkECQoAAAAsAAAAABAAEAAAAzQIutz+TowhIBuEDLuw5opEcUJRVGAxGSBgTEVbGqh8HLV13+1hGAeAINcY4oZDGbIlJCoSACH5BAkKAAAALAAAAAAQABAAAAM2CLoyIyvKQciQzJRWLwaFYxwO9BlO8UlCYZircBzwCsyzvRzGqCsCWe0X/AGDww8yqWQan78EACH5BAkKAAAALAAAAAAQABAAAAMzCLpiJSvKMoaR7JxWX4WLpgmFIQwEMUSHYRwRqkaCsNEfA2JSXfM9HzA4LBqPyKRyOUwAACH5BAkKAAAALAAAAAAQABAAAAMyCLpyJytK52QU8BjzTIEMJnbDYFxiVJSFhLkeaFlCKc/KQBADHuk8H8MmLBqPyKRSkgAAIfkECQoAAAAsAAAAABAAEAAAAzMIuiDCkDkX43TnvNqeMBnHHOAhLkK2ncpXrKIxDAYLFHNhu7A195UBgTCwCYm7n20pSgAAIfkECQoAAAAsAAAAABAAEAAAAzIIutz+8AkR2ZxVXZoB7tpxcJVgiN1hnN00loVBRsUwFJBgm7YBDQTCQBCbMYDC1s6RAAAh+QQJCgAAACwAAAAAEAAQAAADMgi63P4wykmrZULUnCnXHggIwyCOx3EOBDEwqcqwrlAYwmEYB1bapQIgdWIYgp5bEZAAADsAAAAAAAAAAAA=' alt='Loading' />Loading..";
$("#myDiv").click(function(){
$(this).html(loadE);
$(this).load('http://domain/page2 #idContent');
});
});
</script>
<div id="myDiv">OPEN</div> | unknown | |
d11330 | train | Try changing the .vscode/settings.json to add your "locales" path:
{
"i18n-ally.localesPaths": ["src/locales"],
"i18n-ally.sourceLanguage": "english",
}
If this does not work, try to add a defaultNamespace to your language file:
Example:
en.json
{
"translation": {
"login": {
"title": "Welcome!",
"user": "User",
"password": "Password",
},
}
}
Implementation:
const {t} = useTranslation();
const title = t('login.title');
.vscode/settings.json
{
"i18n-ally.localesPaths": ["src/utils/language"],
"i18n-ally.defaultNamespace": "translation",
"i18n-ally.sourceLanguage": "english",
"i18n-ally.keystyle": "nested"
} | unknown | |
d11331 | train | I think you need something like this. You mentioned that you need to update if the invoice number exists and that you also want to add a new item with this invoice number. If you have any questions, feel free to ask me.
public function update(Request $request, $inv_no)
{
$data = $request->all();
$stocks = Stock::where('inv_no', $inv_no)->get();
$i = 0;
foreach ($stocks as $stock) {
$stock->update([
'pid' => $request->pid[$i],
'qty' => $request->qty[$i],
'user_id' => Auth::user()->id,
'Indate'=>$request->Indate,
'supplierName' => $request->supplierName,
'receiptNumber' => $request->receiptNumber,
'truckNumber' => $request->truckNumber,
'driverName' => $request->driverName,
'remark' => $request->remark,
]);
$i++;
}
$totalPid = count($request->pid);
$totalQty= count($request->qty);
Stock::create([
'pid' => $request->pid[$totalPid - 1],
'qty' => $request->qty[$totalQty - 1],
'inv_no' => $request->inv_no,
'user_id' => Auth::user()->id,
'Indate'=>$request->Indate,
'supplierName' => $request->supplierName,
'receiptNumber' => $request->receiptNumber,
'truckNumber' => $request->truckNumber,
'driverName' => $request->driverName,
'remark' => $request->remark,
]);
return $this->index();
}
A: You may use updateOrCreate method:
foreach ($stocks as $stock) {
Stock::updateOrCreate(['id' => $stock->id], [
'pid' => $request->pid[$i],
'qty' => $request->qty[$i],
'inv_no' => $request->inv_no,
'user_id' => Auth::user()->id,
'Indate'=>$request->Indate,
'supplierName' => $request->supplierName,
'receiptNumber' => $request->receiptNumber,
'truckNumber' => $request->truckNumber,
'driverName' => $request->driverName,
'remark' => $request->remark,
]);
}
See Laravel docs for more info.
A: I can see now what you need. If you put this where(['id', $stock->id]) you will only find and update what already exists, since you're comparing with the values you searched for using $stocks = Stock::where('inv_no', $inv_no)->get();.
If you need to search where 'inv_no' is the value you entered, or create a new record, do:
Stock::updateOrCreate(['inv_no' => $inv_no], [
'pid' => $request->pid[$i],
'qty' => $request->qty[$i],
'inv_no' => $request->inv_no,
'user_id' => Auth::user()->id,
'Indate'=>$request->Indate,
'supplierName' => $request->supplierName,
'receiptNumber' => $request->receiptNumber,
'truckNumber' => $request->truckNumber,
'driverName' => $request->driverName,
'remark' => $request->remark,
]);
That way you are going to search for any record whose 'inv_no' equals the value you entered and update it, or create a new record if it doesn't exist. Also, you can remove the lines $stocks = Stock::where('inv_no', $inv_no)->get(); and foreach ($stocks as $stock) {}.
A: updateOrCreate is what you need, but you must adjust it so it works for your needs.
updateOrCreate can take 2 array parameters. The 1st array is basically your where clause, telling the function to look for the specific parameters and, if it finds them, to update; otherwise it creates a new record.
Stock::updateOrCreate([
'id' => $stock->id,
'pid' => $request->pid[$i],
'inv_no' => $request->inv_no,
], [
'qty' => $request->qty[$i],
'user_id' => Auth::user()->id,
'Indate'=>$request->Indate,
'supplierName' => $request->supplierName,
'receiptNumber' => $request->receiptNumber,
'truckNumber' => $request->truckNumber,
'driverName' => $request->driverName,
'remark' => $request->remark,
]);
Not clear about your data but the above code will check in your database if there is a record with:
* id matching $stock->id
* pid matching $request->pid[$i]
* inv_no matching $request->inv_no
You can extend your where clause even further; you can even use only 1 array with all your data and tell the function to look for a record which has all the fields matching, otherwise it creates a new record. | unknown | |
d11332 | train | Try modifying...
List<ItemWriter<MyObject>> writerList = new ArrayList<ItemWriter<MyObject>>();
...with :
List<ItemWriter<? super MyObject>> writerList = new ArrayList<ItemWriter<? super MyObject>>();
CompositeItemWriter#setDelegates takes a list in the form List<ItemWriter<? super T>>.
See spring documentation.
A: Just in case you did not find a proper solution.
I would do it like this:
public ItemWriter<MyObject> myWriter() {
ItemWriter<MyObject> myWriter = new JdbcBatchItemWriter<MyObject>(); // <-- Example item writer 1
return myWriter;
}
public ItemWriter<MyObject> myOtherWriter() {
ItemWriter<MyObject> myOtherWriter = new JdbcBatchItemWriter<MyObject>(); // <-- Example item writer 2
return myOtherWriter;
}
public CompositeItemWriter<MyObject> compositeItemWriter() {
CompositeItemWriter<MyObject> writer = new CompositeItemWriter<MyObject>();
writer.setDelegates(Arrays.asList(myWriter(),myOtherWriter())); //<-- NO ERROR HERE :)
return writer;
}
I hope that helps. | unknown | |
d11333 | train | found a solution on this page: https://getsatisfaction.com/balupton/topics/history_js_and_retrieving_state_from_a_url
"You can use this function however from one of my utility projects called jQuery Sparkle to do what you need: https://github.com/balupton/jquery-sparkle/blob/master/scripts/resources/core.string.js#L164
So you would do this:
var State = History.getState(), dataQuery = State.url. queryStringToJSON();" | unknown | |
d11334 | train | Please rename your file to "todo.jsx".
Explanation:
VSCode and other IDEs choose your parser based on the file extension. For VSCode it looks like you are creating a "normal" JavaScript file. But JavaScript does not know tags, so you get an error message.
A small addition: if you ever work with TypeScript in React, the same applies: instead of the .ts extension you should choose the .tsx extension.
A: It looks like your file is named ToDO.js. Since you're using JSX syntax, the file should be given a .jsx extension, so that VSCode knows how its syntax should be parsed (and, as a result, what errors to display).
So, rename it to ToDO.jsx. | unknown | |
d11335 | train | in place of return FALSE, you can do:
return nil;
or
return [self topViewController];
Either should have the right side effect.
That being said, be careful with your UI design here. Make sure the user knows why the back button doesn't work somehow.
A: I don't understand why you would make the Back button ignore taps? It seems like this would confuse users and the App Store team would consider this a bug. Perhaps you could you post a screenshot?
It would probably be better to redesign your interface and consider 1) using toolbar buttons for navigation (like Mobile Safari) or 2) fully support UINavigation based views rather than working around it.
Update: It sounds like you're going to perform a different action, like displaying a confirmation? I don't know of any official ways to do what you want, since the UINavigationControllerDelegate methods just notify you about transitions, they don't let you cancel/modify them. (And if the transition is animated then playing with the navigation controller's view stack probably won't help.)
So you could always float a transparent (or almost transparent) window over the back button and intercept taps that way. Here's some sample bar that does something similar with the status bar:
https://github.com/myell0w/MTStatusBarOverlay
A: Why don't you disable the back button in the situations that you don't want the user to tap it? | unknown | |
d11336 | train | PHP's include is pretty much the exact same thing as having literally cut/pasted the raw contents of the included file at the point where the include() directive is.
Java's compiled, so there's no source code to "include" - the JVM is simply loading object/class definitions and making them available for use. It's much like the #include directives in C: you're not loading literal source code, just function definitions/prototypes/fingerprints for later use.
A: In php it simply dumps the contents of the file in the current file.
In Java, an imported class is used:
* For compiling the source to byte code using the imported classes.
* At runtime, when the JVM sees that your program references the imported class, it loads it and uses it (for method invocations and member accesses if it is the case).
A: PHP simply just includes whatever is in that file. It's simply merging the two files together.
Java's import function gives you access to the methods specified in that import. Basically, PHP is just a rudimentary combining of the two files while Java gives you access to that file's methods and interface.
A: They are very different. Php just include the source code from the included file.
Java is using the ClassLoader to load the compiled class located somewhere in the CLASSPATH. The import just tells the compiler that you want to reference those classes in the current namespace. The import does not load anything by itself, only when you use new, the JVM will load the class.
A: You have <jsp:include> in Java similar to PHP include.
Java import is similar to PHP load module.
A: The closest to a php include in Java is a static import, i.e. something like: import static javax.servlet.http.HttpServlet. This allows you to reference methods in the same class file as if they were declared locally (this only applies to static members of the imported class). However, this is very seldom used. It's a tighter form of coupling and should be avoided in most cases. The only time I find it helpful is for JUnit test cases. Doing a static import of org.junit.Assert allows you to use the shorter form assertEquals(...) instead of Assert.assertEquals(...). Check out Oracle's documentation on static imports here.
A: The main difference, from my experience, is that PHP allows you to do anything. You can treat PHP includes the same way as Java uses its imports. A PHP file can be all functions, or it can simply execute from start to finish.
So your php file could be
<?php
echo(1 + 4)
?>
or it could include a function which you call later on
<?php
function addTwoNumbers()
{
return 1 + 4;
}
?>
If you included the second PHP file, you could call the addTwoNumbers function below your include statement. I like the practice of specifying individual functions rather than creating many PHP files. | unknown | |
d11337 | train | In your stacktrace seem to be a lot of trailing spaces. Maybe something like this could help:
//removes spaces as seen here: https://stackoverflow.com/questions/5455794/removing-whitespace-from-strings-in-java
Class cls = Class.forName("foo.bar.baz.turn."+ (nextStateName.replaceAll("\\s+","")));
A: Your stacktrace reads: java.lang.ClassNotFoundException: foo/bar/baz/turn/IncDecValue.
I quickly tried to reproduce your problem, and the exception which I received read java.lang.ClassNotFoundException: foo.bar.baz.turn.IncDecValue.
Slashes being displayed instead of dots might indicate that your classes are located in directories instead of packages - or at least that they were when the exception occurred. | unknown | |
d11338 | train | Your expression:
$table-extrait-labels/transaction[reference = @MOVEMENT_ACCOUNTING_RECORD]
is looking for a transaction that has a child reference and a MOVEMENT_ACCOUNTING_RECORD attribute whose values are equal to each other.
If you want to find a transaction whose child reference value is equal to the value of the MOVEMENT_ACCOUNTING_RECORD attribute of the context node (in your case, the currently processed MOVEMENT), you need to use:
$table-extrait-labels/transaction[reference = current()/@MOVEMENT_ACCOUNTING_RECORD]
Or (preferably) construct a key and use it as shown in the examples I pointed to in the comments.
Untested, because no reproducible example was provided.
Note also that testing for the existence of a node and then getting data from that node would be more efficient if you use a variable, instead of repeating the XPath expression. | unknown | |
d11339 | train | Assuming you're using express-graphql, graphqlHTTP is a function that takes some configuration parameters and returns an express middleware function. Normally, it's used like this:
app.use(
'/graphql',
graphqlHTTP({ ... })
)
However, instead of an object, graphqlHTTP can also take a function that returns an object as shown in the docs.
app.use(
'/graphql',
graphqlHTTP((req, res, graphQLParams) => {
return { ... }
})
)
In this way, you can utilize the req, res or graphQLParams parameters inside your configuration object.
app.use(
'/graphql',
graphqlHTTP((req, res, graphQLParams) => {
return {
schema,
// other options
context: {
user: req.user,
// whatever else you want
}
}
})
) | unknown | |
d11340 | train | Seems like an open problem, thus I'd like to answer even though it's late. I am also unsure how much the similarity between the vectors would be affected, but in my practical experience you should first encode your features and then scale them. I have tried the opposite with scikit learn preprocessing.StandardScaler() and it doesn't work if your feature vectors do not have the same length: scaler.fit(X_train) yields ValueError: setting an array element with a sequence. I can see from your description that your data have a fixed number of features, but I think for generalization purposes (maybe you have new features in the future?), it's good to assume that each data instance has a unique feature vector length. For instance, I transform my text documents into word indices with Keras text_to_word_sequence (this gives me the different vector length), then I convert them to one-hot vectors and then I standardize them. I have actually not seen a big improvement with the standardization. I think you should also reconsider which of your features to standardize, as dummies might not need to be standardized. Here it doesn't seem like categorical attributes need any standardization or normalization. K-nearest neighbors is distance-based, thus it can be affected by these preprocessing techniques. I would suggest trying either standardization or normalization and check how different models react with your dataset and task.
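To make the "encode first, then scale only the numeric columns" idea concrete, here is a minimal scikit-learn sketch (the column names are made up for illustration):
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.neighbors import KNeighborsClassifier

numeric_cols = ["age", "income"]        # hypothetical numeric features
categorical_cols = ["city", "gender"]   # hypothetical categorical features

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric_cols),                            # standardize numeric columns only
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),  # dummies stay unscaled
])

model = Pipeline([("prep", preprocess), ("knn", KNeighborsClassifier())])
# model.fit(X_train, y_train); model.score(X_test, y_test)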
A: After. Just imagine that you have not numerical variables in your column but strings. You can't standardize strings - right? :)
But given what you wrote about categories: if they are represented with values, I suppose there is some kind of ranking inside. Probably, you can use the raw column rather than a one-hot-encoded one. Just thoughts.
A: You generally want to standardize all your features so it would be done after the encoding (that is assuming that you want to standardize to begin with, considering that there are some machine learning algorithms that do not need features to be standardized to work well).
A: So there is 50/50 voting on whether to standardize data or not.
I would suggest, given the positive effects in terms of improvement gains (no matter how small) and no adverse effects, that one should do standardization before splitting and training the estimator. | unknown | |
d11341 | train | I think it's not good practice to put such functionality into the model. | unknown | |
d11342 | train | Both actions (positive result and negative result) need to be inside the completion handler.
So something like this:
IsItemInFavoritesAsync(productId: productToDisplay.UniqueID()) { success in
if success {
// Product exists in favorite tree
Constants.showAlert(title: "Item in Tree", message: "Yes it is", timeShowing: 1, callingUIViewController: self)
}
else {
// Product doesn't exist in favorite tree
Constants.showAlert(title: "Item NOT in Tree", message: "Isn't", timeShowing: 1, callingUIViewController: self)
Product.RegisterProductOnDatabaseAsFavorite(prodToRegister: productToDisplay, productType: productType)
}
} | unknown | |
d11343 | train | Not sure if I'm missing something here, but have you tried just setting the crs in the dataframe before doing to_file() like below:
gdf = geopandas.GeoDataFrame(df, geometry='geometry')
gdf.crs = {'init': 'epsg:4326'} # or whatever
A:
Is there any way to specify the desired crs formatting
The shapefile prj file has to be in the WKT format. See:
https://gis.stackexchange.com/questions/114835/is-there-a-standard-for-the-specification-of-prj-files#114851
any built-in method to switch between the different formattings of the crs attribute?
Geopandas uses pyproj as a dependency. If you use pyproj 2+, you can use the pyproj.CRS class to convert formats. See:
https://pyproj4.github.io/pyproj/stable/examples.html
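For example, with pyproj 2+ a conversion sketch looks like this (EPSG:4326 is just an example code):
from pyproj import CRS

crs = CRS.from_epsg(4326)
wkt_string = crs.to_wkt()      # WKT, the format expected by the shapefile .prj
proj4_string = crs.to_proj4()  # legacy proj4 representation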
Additionally, you can use the pyproj.CRS class directly to check for equality of different CRS inputs. | unknown | |
d11344 | train | OK, I figured out where the problem is. It's because when you call Alphabytes.at(b/8) inside the for loop for the first time, the size of Alphabytes is zero, so you try to access an index that is out of the array's range. Replace it with Alphabytes[b/8] = (Alphabytes[b/8] |.......
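A minimal sketch of the safer pattern (requiredBytes and the bit arithmetic are assumptions about the surrounding code):
#include <vector>

std::vector<unsigned char> Alphabytes(requiredBytes, 0);  // pre-sized, so index b/8 is in range
Alphabytes[b / 8] = Alphabytes[b / 8] | (1 << (b % 8));   // read-modify-write is now safe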
A: It seems that in release mode everything compiles and no errors or warnings are detected, so it is strange that this only happens in debug mode. I think I'll report this in the Qt bug tracker, maybe...
Unless anyone can figure out why this error only happens when debugging... | unknown | |
d11345 | train | Remove the slashes from the regex. In other words:
if(!vin.matches("^[^\\Wioq]{17}$")) {
A: Try this at home:
class Vin {
public static void main( String ... args ) {
String vin = "1M8GDM9A_KP042788";
if(!vin.matches("[^\\Wioq]{17}")) { //the offending code, always fails
System.out.println("vininfo did not pass regex");
} else {
System.out.println("works");
}
}
}
Prints:
$java Vin
works
You don't need the /^ and $/ | unknown | |
d11346 | train | (I'm the author of the plug-in that's causing the trouble here)
The example isn't working because this.value isn't referring to the speed (it's undefined). Here's an updated version of your example:
http://jsfiddle.net/eY6Z9/
It would probably be more efficient to store the speed value in a variable rather than within the text value of an HTML element. Here’s an updated version with that enhancement:
http://jsfiddle.net/eY6Z9/2/
Hope that helps! | unknown | |
d11347 | train | Your error comes from using wait(5000); as this is not defined.
You can use await when sending messages to ensure they are sent.
await message.author.send(`Here is an invite`)
message.member.kick('they kicked themself') | unknown | |
d11348 | train | You actually have to compare the popped value's corresponding closing character with chars[i], not the popped value itself.
So you need to do
if (stack.length === 0 || lookup[stack.pop()] !== chars[i]) {
Now, when you pop { from the stack, you will look up its corresponding closing character in the lookup and compare it with the current closing character.
Alternatively you can simply push the expected closing character in the stack so that you don't have do the lookup during the comparison, like this
stack.push(lookup[chars[i]]); | unknown | |
d11349 | train | ... effective memory "bandwidth" ... from main memory to CPU in a worst case scenario:
There are two "worst" scenarios: memory accesses which don't use (miss) CPU caches, and memory accesses whose addresses are too far apart to reuse open DRAM rows.
the RAM cache
The cache is not part of RAM; it is part of the CPU, and is called the CPU cache (the top part of the memory hierarchy).
is made totally inefficient due to long distances in the successive addresses being treated.
Modern CPU caches have many built-in hardware prefetchers, which may detect non-random strides between several memory accesses. Many prefetchers will detect any stride inside an aligned 4 kilobyte (KB) page: if you access address1, then address1 + 256 bytes, the L1 prefetcher will start accessing address1 + 256*2, address1 + 256*3, etc. Some prefetchers may try to predict outside the 4 KB range. So, using only long distances between accesses may not be enough. (Prefetchers may be disabled: https://software.intel.com/en-us/articles/disclosure-of-hw-prefetcher-control-on-some-intel-processors)
As far as I understand what matters here is the RAM latency and not its bandwidth
Yes, there are some modes when RAM accesses are latency limited.
The scenario is this (say you work with 64 bits=8 bytes values):
You may work with 8-byte values, but you should consider that memory and cache work with bigger units. Modern DRAM memory has a 64-bit (8-byte) wide bus (72 bits for 64+8 in the case of ECC), and many transactions use several bus clock cycles (burst prefetch in DDR4 SDRAM transfers 8n, i.e. 8 * 64 bits).
Many transactions between CPU cache and memory controller are bigger too and sized as full cache line or as half of cache line. Typical cache line is 64 bytes.
you read data at an address
make some light weight CPU computation (so that CPU is not the bottleneck)
then you read data at new address quite far-away from the first one
This method is not well suitable for modern out-of-order CPUs. CPU may speculatively reorder machine commands and start execution of next memory access before current memory access is done.
Classic tests for cpu cache and memory latency (lat_mem_rd from lmbench http://www.bitmover.com/lmbench/lat_mem_rd.8.html and many others) use memory array filled with some special pseudo-random pattern of pointers; and the test for read latency is like (https://github.com/foss-for-synopsys-dwc-arc-processors/lmbench/blob/master/src/lat_mem_rd.c#L95)
char **p = start_pointer;
for(i = 0; i < N; i++) {
p = (char **)*p;
p = (char **)*p;
... // repeated many times to hide loop overhead
p = (char **)*p;
}
So, address of next pointer is stored in the memory; cpu can't predict next address and start next access speculatively, it will wait for read data from caches or from memory.
I'd like to have an idea of the throughput (say in bytes).
It can be measured in accesses per second; for byte accesses, or word accesses or 8 byte accesses there will be similar number of accesses/s and throughput (bytes/s) will be multiplied for unit used.
Sometimes a similar value is measured - GUPS - giga-updates per second (data in memory is read, updated and written back) with the RandomAccess test. This test can use the memory of a computing cluster of hundreds (or tens of thousands) of PCs - check the GUP/s column in http://icl.cs.utk.edu/hpcc/hpcc_results.cgi?display=combo
A simple calculation assuming the RAM has typical DDR3 13 ns latency yields a bandwidth of 8 B/ 13 ns = 600 MB/s. But this raises several points:
RAM has several latencies (timings) - https://en.wikipedia.org/wiki/Memory_timings
And the 13 ns CAS latency is only relevant when you access an already-open row. For random accesses you will often hit a closed row, and the T_RCD latency is added on top of CAS. | unknown | |
d11350 | train | Solution 1 should be working (see here and here). If it doesn't work, please attach log file for analysis.
Options 2 and 3 are for something totally different (verification of intent resolution confidence), and Option 4 is not a Botium feature.
What you can try as well: Botium by default does substring matching for assertions, so your convo file could look something like this:
#me
what is the date today ?
#bot
Today is | unknown | |
d11351 | train | I've also run into this issue in one of my CDK projects. The problem I discovered was that I had compiled .js files alongside my .ts files in my working directory. The relative imports selected the stale .js files instead of compiling the .ts files as another es module. These .js files were generated after running cdk deploy locally.
Solution: delete any .js files alongside your .ts files.
I hope this helps someone else in the future. | unknown | |
d11352 | train | break the loop when you find first match.
for (int i = 0; i <= n - 1; i++) {
if (iv[i] == a) {
hely = i;
break;
}
}
A: You need to exit the for loop after finding the first match:
for (int i = 0; i <= n - 1; i++) {
if (iv[i] == a) {
hely = i;
break;
}
} | unknown | |
d11353 | train | Move line? I really like IntelliJ IDEA's "Move statement" shortcut (Ctrl + Shift + ↑/↓). However -- I am not sure if this is a bug related to ActionScript editing only -- move statement is not always what I want and sometimes it is not correct when editing AS code.
So I just want to move a block of lines up/down. The Eclipse shortcut is Alt + ↑/↓ and does not move statement-wise. Is there an equivalent in IntelliJ IDEA?
A: As other people have said this is already available as a command. You can configure the short cut to your liking, but by default (at least in IntelliJ 10) it is bound to ALT + SHIFT + ↑ and ALT + SHIFT + ↓
A: shift + alt + ↑/↓
you can find All short cuts HERE
https://resources.jetbrains.com/storage/products/intellij-idea/docs/IntelliJIDEA_ReferenceCard.pdf
A: Please find some useful shortcut for IntelliJ:
(1) IntelliJ Debugger
Step over (Go To next Step or line) : F8
Step into (Go into function) : F7
Smart step into : Shift + F7
Step out : Shift + F8
Run to cursor : Alt + F9
Evaluate expression : Alt + F8
Resume program : F9 [Mac = Cmd + ALT + R]
Toggle breakpoint : Ctrl + F8 [Mac = Cmd + F8]
View breakpoints : Ctrl + Shift + F8 [Mac = Cmd + Shift + F8]
(2) Open Specific File
Ctrl + Shift + N
(3) Open All Methods Implemented in class
Open specific class and press,
Ctrl + F12
(4) Go to Specific Line Number
Ctrl + G
(5) Method Implementation and Declaration
Declaration : Ctrl + B
Implementation : Ctrl + Alt + B
Response Type Declaration : Ctrl + Shift + B
Super class override Method : Ctrl + U
(6) Reformate Code
Ctrl + Alt + L
(7) Import relevant class
Click on relevant class (Red color field) and press,
Alt + Enter
Select valid class as per requirement
(8) Hierarchy of method calls
Select specific method and press,
Ctrl + Alt + H
(9) Comment In Code
Single Line : Select specific line and press,
Ctrl + /
Multiple Line : Select Multiple Line and Press,
Ctrl + Shift + /
(Note : Same operation for uncomment the code)
(10) Display Line Number
Hit Shift twice > write "line" > Show Line Numbers (the line doesn't have the toggle)
View > Active Editor > Show Line Number
(11) Code Selection
Full class selection : Ctrl + A
Method Selection : Select Method Name and press, Ctrl + W
(12) Basic Code Completion
To complete methods, keywords etc press,
Ctrl + Space
(13) Code Copy and Paste
Copy : Ctrl + C
Paste : Ctrl + V
(14) Search Operation
Specific File : Ctrl + F
Full Project : Ctrl + Shift + F
(15) Switcher Popup
Open Switcher Popup : Ctrl + Tab
Keep Ctrl pressed and use ↑/↓/←/→ to move from one place to another
(16) Forward Move & Backward Move
Backward : Ctrl + Alt + ← (Left-Arrow)
Forward : Ctrl + Alt + → (Right-Arrow)
(17) Next/previous highlighted error
F2 or (Shift + F2)
(18) Open Java Doc
Select specific method name and press,
Ctrl + Q
(19) Find All commands
Ctrl + Shift + A
(20) Move Line Up/Down
shift + alt + ↑/↓
Thanks...
A: The LineMover plug-in works very well and is an acceptable solution.
A: Open Settings -> Keymap then search for "move line" via the upper right searchbox.
Under the Code folder you'll see:
*
*Move Statement Down
*Move Statement Up
*Move Line Down
*Move Line Up
The actions you're looking for are (as you may guess) the move line actions.
A: Try Command + Shift + ↑/↓; this will auto-adjust the indentation.
A: You can move several lines together with move statement. Are you trying to move partial lines? I don't think there's a way in Idea. | unknown | |
d11354 | train | I wouldn't say one of them is much simpler than the other. Most often Tycho is preferred since Maven is de facto an industry standard for builds. As such, Maven skills are more common. This also allows you to build an RCP product like any other application using Maven.
In Maven/Tycho both the pom.xml and the OSGi manifest include dependency information, so there's a bit of redundancy. The idea is to have one of these files be the master version. If you choose the OSGi files to be the master, the resulting approach is called manifest-first. Choose otherwise and you end up with POM-first.
I'm using manifest-first and my POM files for plugins look like this:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>aerie-sdk</groupId>
<artifactId>aerie-sdk</artifactId>
<version>3.0.5-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<groupId>aerie-sdk</groupId>
<artifactId>com.example.aerie.ui</artifactId>
<version>2.5.5-SNAPSHOT</version>
<packaging>eclipse-plugin</packaging>
</project>
I only need to bump versions in POM files to match the ones in manifest files.
Answers:
*
*I'd say the one you're more familiar with is simpler. If you're new to Maven but have some experience with PDE build then Maven is harder. For Maven POM-first there's a need to tweak POM files quite often. If you choose manifest-first then POMs are not modified that frequently (and changes are simpler - mostly version changes).
*You can run Ant from Maven, no need to convert.
*Not really. If you hit any bumps there's a lot of information on Eclipse community forums, stackoverflow or similar sites.
A: I'm not sure your comparison makes sense at all. The Eclipse PDE exporter is an interactive tool, and as such cannot be used for an automated build. So if you need an automated build, you will have to move to something else, e.g. Tycho.
Tycho is a Maven build extension which aims to make it easy to set up an automated build for projects developed with the Eclipse PDE. It re-uses the configuration files that you already have (MANIFEST.MF, feature.xml, *.product), so the additional configuration files needed for Tycho (pom.xml) are pretty minimal and rarely need to be updated.
The only piece of information which is redundant in the pom.xml file is the artifact version. To support you when updating artifact versions, there is a command-line tool: the tycho-versions-plugin. | unknown | |
d11355 | train | You're asking about invokable objects, or treating objects as functions. To be able to do that, you need to implement __invoke() method, like this:
class Foo {
public function __invoke()
{
echo 'invoke!';
}
}
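Calling it then looks like an ordinary function call:
$foo = new Foo();
$foo(); // prints "invoke!"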
Read more: http://php.net/manual/en/language.oop5.magic.php#object.invoke
Also, you can call $foo->bar; as many times as you want, just as you can write: $a = 5; $a; - but it will have no effect. It's just a field. If it's a callable, you'll have to add parentheses to call it. | unknown | |
d11356 | train | I would argue that you shouldn't need to do this - the caller can handle that with a lambda expression. For example:
int x = session.GetVal<int>("index", () => "something".IndexOf("o"));
Here we're capturing the idea of calling IndexOf on "something" passing in the argument "o". All of that is captured in a simple Func<int>.
A: You can add an overload to your function
public static T GetVal<T>(this HttpSessionStateBase Session, string key, Func<IList<object>,T> getValues, IList<object> args)
{
if (Session[key] == null)
Session[key] = getValues(args);
return (T)Session[key];
}
A: You'll have to define your own delegate rather than Func. The following will work perfectly here:
public delegate TResult ParamsFunc<TResult>(params object[] args); | unknown | |
d11357 | train | Use an UPSERT statement; most databases support UPSERT statements. The return value would be the number of records updated or inserted. However, you have not mentioned which database you are performing the update or insert operation against.
UPDATE
Oracle 11g does support upsert
MERGE INTO KP_TBL USING DUAL ON (MY_KEY= #{myKey})
WHEN MATCHED THEN
UPDATE SET OTHER_PARAM = #{myOtherParam},
SEC_PARAM = #{sec_param}
WHEN NOT MATCHED THEN
INSERT (MY_KEY, OTHER_PARAM,SEC_PARAM) VALUES(#{myKey},#{myOtherParam},#{sec_param)
A: This could be done by means of SQL. Unfortunately you didn't mention which particular database management system you use, as there is no such feature in ANSI SQL.
For example if you use MySQL INSERT ... ON DUPLICATE KEY UPDATE is the way to go.
E.g.:
<insert id="instSample" parameterType="SampleModel">
INSERT INTO table (a,b,c) VALUES (#{a}, #{b}, #{c})
ON DUPLICATE KEY UPDATE c = #{c};
</insert> | unknown | |
d11358 | train | We don't need the sd tool to reproduce this behavior. Here it is in pure Rust:
let re = regex::Regex::new(r"(?P<n>b)").unwrap();
let before = "abc";
assert_eq!(re.replace_all(before, "$nB"), "ac");
assert_eq!(re.replace_all(before, "${n}B"), "abBc");
The brace replacement syntax isn't described in the front documentation but on the documentation of the replace method:
The longest possible name is used. e.g., $1a looks up the capture
group named 1a and not the capture group at index 1. To exert more
precise control over the name, use braces, e.g., ${1}a.
In short, unless the character immediately following the group name in the replacement pattern is one that can't be part of a name, you should always put the group name between braces.
A: For the rust regexes via sd via bash use case in the question:
echo 'abc' | sd -p '(?P<cg>b)' '${cg}B' | unknown | |
d11359 | train | I think you are confusing the number of characters (valid) and the type of error -
else if (valid == 9)
result = result + " Needs digit and lowercase letter";
could be produced from 123456abc
As valid == 9 really only counts the characters in the set. Separate counting and whether character classes are used.
A: It's better to use some flags (bool variables) instead of one number.
If you want to use one number, you must create 2^(things to check) situations
using that number. Here 2^4 = 16 situations is required.
One of the easiest way to mix flags in one number is this:
nth digit = nth flag
for example use this order (length, lower, upper, digit).
So,
to set length validity, add 1000 to number;
to set lower validity, add 100 to number;
to set upper validity, add 10 to number;
to set digit validity, add 1 to number;
Now,
((number)%10 == 1) means digit validity
((number/10)%10 == 1) means upper validity
((number/100)%10 == 1) means lower validity
((number/1000)%10 == 1) means length validity
Following code uses separate flags:
#include<iostream>
#include<string>
using namespace std;
class PasswordStatus
{
bool len, low, up, dig;
//============
public:
PasswordStatus()
{
len = low = up = dig = false;
}
//-----------
void ShowStatus()
{
cout << endl << "Password Status:" << endl;
cout << "Length : " << (len ? "OK" : "Too Short") << endl;
cout << "Contains Lower Case : " << (low ? "Yes" : "No") << endl;
cout << "Contains Upper Case : " << (up ? "Yes" : "No") << endl;
cout << "Contains Digit : " << (dig ? "Yes" : "No") << endl;
}
//-----------
void checkValidity(string pass)
{
int sLen = pass.length();
len = (sLen >= 6);
for(int i = 0; i<sLen; i++)
{
char c = pass[i];
if(!low && islower(c)) {low = true; continue;}
if(!up && isupper(c)) {up = true; continue;}
if(!dig && isdigit(c)) {dig = true; continue;}
}
}
//-----------
bool IsTotalyValid()
{
return low && up && dig && len;
}
};
//====================================================================
int main()
{
PasswordStatus ps;
string pw;
do
{
cout << endl << "Enter password: " << endl;
cin >> pw;
ps.checkValidity(pw);
ps.ShowStatus();
} while (!ps.IsTotalyValid());
cout << "Valid Password : " << pw;
return 0;
} | unknown | |
d11360 | train | I will suggest an easier and less complicated approach; the Bootstrap framework will handle the rest.
Replace the following
<td class="action">
<input type="submit" value=" Reply" href="javascript:;" onclick="jQuery('#modal-6').modal('show', {backdrop: 'static'});" style="background-color: #313437;border:1px solid #313437" class="btn btn-primary btn-single btn-sm fa-input">
<input type="text" hidden="" value="<?php echo $i;?>" name="id"></input>
</td>
With
<td class="action">
<button type="button" data-toggle="modal" data-target="#modal-6" data-id="This Is Id" class="btn btn-primary btn-single btn-sm fa-input">Reply</button>
</td>
Use the data-toggle and data-target attributes to open the modal; with an additional data-id attribute and the show.bs.modal or shown.bs.modal modal events, you can pass the value to the modal
$(document).ready(function() {
$('#modal-6').on('show.bs.modal', function(e) {
var id = $(e.relatedTarget).data('id');
alert(id);
});
});
To keep backdrop: 'static', add it to the modal HTML markup
<div class="modal fade validate" id="modal-6" data-backdrop="static" data-keyboard="false">
<!--- Rest of the modal code ---->
</div>
A: I have been inspired by @Shehary but had to use it with slightly different syntax.
$(document).ready(function() {
$('#modal-6').on('show.bs.modal', function(e) {
var id = $(e.relatedTarget.id).selector;
alert(id);
});
}); | unknown | |
d11361 | train | Perhaps the draw method is not returning anything. Try changing your code to this:
message1 = Text(Point(50,50), "Click")
message1.draw(win)
message1.setText("")
A: I'm not sure how to answer your second question properly..so I'll just do it as an answer here.
The reason the first does not work is because you are assigning the return value of Text.draw to message. Since it returns nothing, then message is None.
In the working code, you assign message with the type Text and initialize the object. You then call the draw method of this object, and the setText method.
In the non-working code, you are calling the draw method on a new Text object, then assigning the return of that - that is, NoneType - to message. And since None has no setText method, you get an error.
(Sorry if I have mixed up NoneType and None there) | unknown | |
d11362 | train | The reason for pre-flight failing is a standard problem with CORS:
How does Access-Control-Allow-Origin header work?
It is fixed by having the server set the correct Access-Control-Allow-Origin header.
However, the root cause of your problem could be a misconfiguration of your server or your client, since 500 is an internal server error. So the request might not be reaching the intended server code, but some generic handler in Apache or whatever is hosting the service. Most generic error handlers don't provide CORS support by default.
Note, if you are using REST API, 500 is not a code you should see in a successful API use scenario. It means that the server is not capable of handling the request properly.
Maybe this link would have relevant information:
EXCEPTION: Response with status: 0 for URL: null in angular2 | unknown | |
d11363 | train | The problem is with z-index: -100 on your #holder element. Click events are ending on the body element and not making it down through to the link. Remove the negative z-index and it should work. | unknown | |
d11364 | train | The selector is wrong.
if you have
var pull = $('#pull');
Wouldn't it be
pull.on('click', function(e) {...
or just
$('#pull').on(...)
A: Try this. It'll execute the slideToggle function when a nav item is clicked.
menu.find('a').click(function(){
menu.slideToggle();
});
A: If you want to slide the menu up when you click outside of it, you can use the method
stopPropagation();
http://jsfiddle.net/Joseph82/jwNEZ/ | unknown | |
d11365 | train | Got it.
The client's IdentityToken expired, and since that API method wasn't protected by the [Authorize] attribute, it didn't renew. I also changed the script to use an iframe instead of AJAX calls!
Case closed. | unknown | |
d11366 | train | I believe
tab[i + 1] = 0;
is the problem. The control only comes out of the previous loop when i < m is false, i.e., when i == m, as per the increment statement.
Now, you want to put the 0 at this index, not at index + 1.
Change to tab[i] = 0;.
Now, the probable reason for the negative number: the content of the memory location returned by malloc() is indeterminate, so if you fail to write the index when i == m (you were writing from 0 to m-1, then from m+1 to m+n-1), reading it will return that indeterminate value.
A: The problem is in your tab[i + 1] = 0; line. When the preceding loop has finished, then i will be equal to m — that is to say, the i value will already be "one beyond the end".
So, just change that to: tab[i] = 0; and you should be fine. | unknown | |
d11367 | train | I think you are giving the wrong path and that's why you are getting a 404 error. You can use the following code:
background-image: url("image/img.jpg");
A: There could be multiple issues like :
*
*Incorrect directory-URL: Try specifying the whole URL - instead of "/image/img.jpg" try "/templates/image/img.jpg".
*Misspelled directory-name - check it properly.
*Reassure yourself that the specified image is in the correct format (.jpg).
A: The (Flask) standard way of doing this, which you'll see done in examples and tutorials, is to put static files in a static folder at the root of the project. Assuming your CSS is also in static,
background-image: url('/static/img.jpg');
does what you want. But if you're embedding styles in templates,
background-image: url({{ url_for('static', filename='img.jpg') }});
is preferred, though the former will work just fine.
If things get more complicated and you start using blueprints, consult the documentation. | unknown | |
d11368 | train | ) drive and record any errors encountered during the process. Then removes it to map the following item.
When some drives are mapped, depending on the user's administrative privileges, it may display an "access is denied" error message.
How can I write that message to a file in case it occurs?
Set objNetwork = WScript.CreateObject("WScript.Network")
' loop through array and write shared servers to result excel sheet
For Each item in arrServerValues
Echo item
objNetWork.MapNetworkDrive "A:", item
' record potential error message to a result file: Results.txt
objNetWork.RemoveNetworkDrive "A:"
Next
A: Learn about the following (a combined sketch follows the list):
*
*On Error statement: to catch the error
*Err object: to grab the error message
*File operations (File System Object): to write to a file | unknown | |
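A minimal sketch combining the three (the log file name and append mode are assumptions):
On Error Resume Next
objNetwork.MapNetworkDrive "A:", item
If Err.Number <> 0 Then
    ' log the failure (8 = ForAppending, True = create the file if missing)
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set logFile = fso.OpenTextFile("Results.txt", 8, True)
    logFile.WriteLine item & " : " & Err.Description
    logFile.Close
    Err.Clear
Else
    objNetwork.RemoveNetworkDrive "A:"
End If
On Error GoTo 0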
d11369 | train | TextRank implementations tend to be lightweight and can run fast even with limited memory resources, while the transformer models such as BERT tend to be rather large and require lots of memory. While the TinyML community has outstanding work on techniques to make DL models run within limited resources, there may be a resource advantage for some use cases.
Some of the TextRank implementations can be "directed" by adding semantic relations, which one can consider as a priori structure to enrich the graph used -- or in some cases means of incorporating human-in-the-loop approaches. Those can provide advantages over supervised learning models which have been trained purely on data. Even so, there are similar efforts for DL in general (e.g., variations on the theme of transfer learning) from which transformers may benefit.
Another potential benefit is that TextRank approaches tend to be more transparent, while transformer models can be challenging in terms of explainability. There are tools that help greatly, but this concern becomes important in the context of model bias and fairness, data ethics, regulatory compliance, and so on.
Based on personal experience, while I'm the lead committer for one of the popular TextRank open source implementations, I only use its extractive summarization features for use cases where a "cheap and fast" solution is needed. Otherwise I'd recommend considering more sophisticated approaches to summarization. For example, I recommend keeping watch on the ongoing research by the author of TextRank, Rada Mihalcea, and her graduate students at U Michigan.
In terms of comparing "Which text summarization methods work better?" I'd point toward work on abstractive summarization, particularly recent work by John Bohannon, et al., at Primer. For excellent examples, check the "Daily Briefings" of CV19 research which their team generates using natural language understanding, knowledge graph, abstractive summarization, etc. Amy Heineike discusses their approach in "Machines for unlocking the deluge of COVID-19 papers, articles, and conversations". | unknown | |
d11370 | train | Assuming the Date column is the index.
*
*Stacking will drop nan by default
*Align with 'inner' logic
*Check equality
*Group and check all True
pd.Series.eq(*df1.stack().align(df2.stack(), 'inner')).groupby(level=1).all()
If Date is not the index
pd.Series.eq(
*df1.set_index('Date').stack().align(
df2.set_index('Date').stack(), 'inner'
)
).groupby(level=1).all()
A: Check with eq and isnull Data from user3483203
((df1.eq(df2))|df2.isnull()|df1.isnull()).all(0)
Out[22]:
A True
B True
C False
dtype: bool
A: Using fillna with eq
df2.fillna(df1).eq(df1).all(0)
A True
B True
C False
dtype: bool
This works by filling in NaN values with valid values from df1, so they will always be equal where df2 is null (essentially the same as ignoring them). Next, we create a boolean mask comparing the two arrays:
df2.fillna(df1).eq(df1)
A B C
2000-01-01 True True True
2000-01-02 True True True
2000-01-03 True True True
2000-01-04 True True False
2000-01-05 True True False
Finally, we assert that all the values for each column are True, in order for the columns to be considered equal.
Setup
It looks like you copied the wrong DataFrame for df1 based on your desired output and merge, so I derived it from your merge:
df1 = pd.DataFrame({'A': {'2000-01-01': 3.0, '2000-01-02': 5.0, '2000-01-03': 1.0, '2000-01-04': 2.0, '2000-01-05': 1.0}, 'B': {'2000-01-01': 4.0, '2000-01-02': 9.0, '2000-01-03': 6.0, '2000-01-04': 4.0, '2000-01-05': 3.0}, 'C': {'2000-01-01': 5.0, '2000-01-02': 2.0, '2000-01-03': 5.0, '2000-01-04': 1.0, '2000-01-05': 3.0}})
df2 = pd.DataFrame({'A': {'2000-01-01': np.nan, '2000-01-02': 5.0, '2000-01-03': 1.0, '2000-01-04': 2.0, '2000-01-05': 1.0}, 'B': {'2000-01-01': np.nan, '2000-01-02': np.nan, '2000-01-03': np.nan, '2000-01-04': 4.0, '2000-01-05': 3.0}, 'C': {'2000-01-01': np.nan, '2000-01-02': np.nan, '2000-01-03': 5.0, '2000-01-04': 8.0, '2000-01-05': 4.0}}) | unknown | |
d11371 | train | Need to add
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.authentication.AuthenticationManager;
@Configuration
public class UserSecurityConfig extends WebSecurityConfigurerAdapter {
@Bean
public AuthenticationManager getAuthenticationManager() throws Exception {
return super.authenticationManagerBean();
}
}
A: It seems that I've found the solution to the issue. Besides needing to add the spring-boot-starter-security dependency, starting from Spring Security 5.x a bean of type AuthenticationManager is not autoconfigured for you; you need to define one yourself
see: https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.0-Migration-Guide#authenticationmanager-bean
A: Alright, so in recent versions of Spring it is not enough to simply add the bean in your configurer class, you actually need to do something else.
Whatever dependency is missing to autowire or to create a bean has to be specified in the main class of your application. For your case, let's say your WebSecurityConfigurerAdapter is under the package "configure"; do the following:
@SpringBootApplication
@ComponentScan({"controller", "services", "configure"}) // This is what you need
@EntityScan("com.my.sso.server.models")
public class YourApplicationMainClass { ...
Also, when adding and overriding the bean, make sure @Override is above @bean:
@Override
@Bean
public AuthenticationManager authenticationManagerBean() throws Exception {
return super.authenticationManagerBean();
}
A: The missing class is in spring-security-core. Try adding this dependency:
compile ('org.springframework.boot:spring-boot-starter-security') | unknown | |
d11372 | train | This is not elegant, but it works as requested until all 1000 codes are used:
select x from
(select concat(hundreds, tens, ones) x from
(select 1 as ones union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9 union select 0) ones,
(select 1 as tens union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9 union select 0) tens,
(select 1 as hundreds union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9 union select 0) hundreds) thousand
where thousand.x not in
(select code from tempcode)
order by rand() limit 1; | unknown | |
d11373 | train | Modifying the source code of the FilePickerImplementation plugin for the iOS platform worked, in this way:
using Foundation;
using MobileCoreServices;
using Plugin.FilePicker.Abstractions;
using System;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using UIKit;
using System.Diagnostics;
namespace Plugin.FilePicker
{
/// <summary>
/// Implementation for FilePicker
/// </summary>
public class FilePickerImplementation : NSObject, IUIDocumentMenuDelegate, IFilePicker
{
private int _requestId;
private TaskCompletionSource<FileData> _completionSource;
/// <summary>
/// Event which is invoked when a file was picked
/// </summary>
public EventHandler<FilePickerEventArgs> Handler
{
get;
set;
}
private void OnFilePicked(FilePickerEventArgs e)
{
Handler?.Invoke(null, e);
}
public void DidPickDocumentPicker(UIDocumentMenuViewController documentMenu, UIDocumentPickerViewController documentPicker)
{
documentPicker.DidPickDocument += DocumentPicker_DidPickDocument;
documentPicker.WasCancelled += DocumentPicker_WasCancelled;
documentPicker.DidPickDocumentAtUrls += DocumentPicker_DidPickDocumentAtUrls;
UIApplication.SharedApplication.KeyWindow.RootViewController.PresentViewController(documentPicker, true, null);
}
private void DocumentPicker_DidPickDocumentAtUrls(object sender, UIDocumentPickedAtUrlsEventArgs e)
{
var control = (UIDocumentPickerViewController)sender;
foreach (var url in e.Urls)
DocumentPicker_DidPickDocument(control, new UIDocumentPickedEventArgs(url));
control.Dispose();
}
private void DocumentPicker_DidPickDocument(object sender, UIDocumentPickedEventArgs e)
{
var securityEnabled = e.Url.StartAccessingSecurityScopedResource();
var doc = new UIDocument(e.Url);
var data = NSData.FromUrl(e.Url);
var dataBytes = new byte[data.Length];
System.Runtime.InteropServices.Marshal.Copy(data.Bytes, dataBytes, 0, Convert.ToInt32(data.Length));
string filename = doc.LocalizedName;
string pathname = doc.FileUrl?.ToString();
// iCloud drive can return null for LocalizedName.
if (filename == null)
{
// Retrieve actual filename by taking the last entry after / in FileURL.
// e.g. /path/to/file.ext -> file.ext
// filesplit is either:
// 0 (pathname is null, or last / is at position 0)
// -1 (no / in pathname)
// positive int (last occurence of / in string)
var filesplit = pathname?.LastIndexOf('/') ?? 0;
filename = pathname?.Substring(filesplit + 1);
}
OnFilePicked(new FilePickerEventArgs(dataBytes, filename, pathname));
}
/// <summary>
/// Handles when the file picker was cancelled. Either in the
/// popup menu or later on.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
public void DocumentPicker_WasCancelled(object sender, EventArgs e)
{
{
var tcs = Interlocked.Exchange(ref _completionSource, null);
tcs.SetResult(null);
}
}
/// <summary>
/// Lets the user pick a file with the systems default file picker
/// For iOS iCloud drive needs to be configured
/// </summary>
/// <returns></returns>
public async Task<FileData> PickFile()
{
var media = await TakeMediaAsync();
return media;
}
private Task<FileData> TakeMediaAsync()
{
var id = GetRequestId();
var ntcs = new TaskCompletionSource<FileData>(id);
if (Interlocked.CompareExchange(ref _completionSource, ntcs, null) != null)
throw new InvalidOperationException("Only one operation can be active at a time");
var allowedUtis = new string[] {
UTType.UTF8PlainText,
UTType.PlainText,
UTType.RTF,
UTType.PNG,
UTType.Text,
UTType.PDF,
UTType.Image,
UTType.UTF16PlainText,
UTType.FileURL
};
var importMenu =
new UIDocumentMenuViewController(allowedUtis, UIDocumentPickerMode.Import)
{
Delegate = this,
ModalPresentationStyle = UIModalPresentationStyle.Popover
};
UIApplication.SharedApplication.KeyWindow.RootViewController.PresentViewController(importMenu, true, null);
var presPopover = importMenu.PopoverPresentationController;
if (presPopover != null)
{
presPopover.SourceView = UIApplication.SharedApplication.KeyWindow.RootViewController.View;
presPopover.PermittedArrowDirections = UIPopoverArrowDirection.Down;
}
Handler = null;
Handler = (s, e) => {
var tcs = Interlocked.Exchange(ref _completionSource, null);
tcs?.SetResult(new FileData(e.FilePath, e.FileName, () => { var url = new Foundation.NSUrl(e.FilePath); return new FileStream(url.Path, FileMode.Open, FileAccess.Read); }));
};
return _completionSource.Task;
}
public void WasCancelled(UIDocumentMenuViewController documentMenu)
{
var tcs = Interlocked.Exchange(ref _completionSource, null);
tcs?.SetResult(null);
}
private int GetRequestId()
{
var id = _requestId;
if (_requestId == int.MaxValue)
_requestId = 0;
else
_requestId++;
return id;
}
public async Task<bool> SaveFile(FileData fileToSave)
{
try
{
var documents = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
var fileName = Path.Combine(documents, fileToSave.FileName);
File.WriteAllBytes(fileName, fileToSave.DataArray);
return true;
}
catch (Exception ex)
{
Debug.WriteLine(ex.Message);
return false;
}
}
public void OpenFile(NSUrl fileUrl)
{
var docControl = UIDocumentInteractionController.FromUrl(fileUrl);
var window = UIApplication.SharedApplication.KeyWindow;
var subViews = window.Subviews;
var lastView = subViews.Last();
var frame = lastView.Frame;
docControl.PresentOpenInMenu(frame, lastView, true);
}
public void OpenFile(string fileToOpen)
{
var documents = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
var fileName = Path.Combine(documents, fileToOpen);
if (NSFileManager.DefaultManager.FileExists(fileName))
{
var url = new NSUrl(fileName, true);
OpenFile(url);
}
}
public async void OpenFile(FileData fileToOpen)
{
var documents = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
var fileName = Path.Combine(documents, fileToOpen.FileName);
if (NSFileManager.DefaultManager.FileExists(fileName))
{
var url = new NSUrl(fileName, true);
OpenFile(url);
}
else
{
await SaveFile(fileToOpen);
OpenFile(fileToOpen);
}
}
}
}
A: To answer my own question, i customized in plugin, like
*
*Used DocumentPicker_DidPickDocumentAtUrls event instead of DocumentPicker_DidPickDocument.
*while returning selected file used
new FileData(e.FilePath, e.FileName, () =>
{
var url = new Foundation.NSUrl(e.FilePath);
return new FileStream(url.Path, FileMode.Open, FileAccess.Read);
})
This solved my issue. Thanks.
A: There are several forks of the FilePicker Xamarin plugin. I recommend the following project, since it's the most actively maintained one:
https://github.com/jfversluis/FilePicker-Plugin-for-Xamarin-and-Windows (note: I'm one of the contributors to the project).
With this version of the plugin file picking should work. The example code from sandeep's answer was already incorporated into the latest version of the plugin. Be sure to read the README.md's Troubleshooting page in case you're getting problems. | unknown | |
d11374 | train | When you process image on server, use image manipulation library (getimagesize for example) to detect it's width and height. When this fails, reject the image. You will probably do it anyway to generate thumbnail, so it is like one extra if.
A: There are many ways of checking the actual files. How Facebook does it, only the ones who created it know i think :).
Most likely they will look at the first bytes in the file. All files have certain bytes describing what they truely are. For this however you need loads of time/money creating a database or such against which you can validate the uploads.
More common solutions are;
FORM attribute
In a lot of browsers, of course excluding Internet Explorer, you can set an accept attribute which checks on extensions client side. More info here: File input 'accept' attribute - is it useful?
Extension
This is not realy secure, for a script can be saved with an image extension
Read file MIME TYPE
This is a solution like you stated in your question. This however is also easy to bypass and relies on the up-to-date status of your server.
Processing the image
The most reliable (for most developer skills and available time) would be to process the image as a test.
Put it in a library like GD or Imagic. They will raise errors when an image is not realy an image. This however will require you to keep that software up to date.
In short, there is not a 100% guarantee to catch this without spending tons of hours. Even then you only get 99,9%. You should weigh your available time against the above options and choose which best suits you. As best practice i recommend a combination of all 3.
This topic is also discussed in Security: How to validate image file uploads?
A: Headers in your file won't be the same. | unknown | |
d11375 | train | How is this possible from within the sandbox?
I have ported the Python interpreter to WinRT to achieve that. Instead of using Win32 API, it now uses WinRT API (in particular for reading files from the user's Documents folder).
Can I run a file (say test.py on the desktop) from my JavaScript app?
In principle, yes. You need to take the python33.dll from my app, wrap it as a WinRT component, and then call it. It actually is a WinRT component already, but doesn't expose any of the Python API. See
http://hg.python.org/sandbox/loewis/file/ee9f8c546ddd/win8app/python33
A: Basically, what you need to do is write a C++ "shell" app according to the Metro rules, then host the Python interpreter inside that C++ app. And again, scrub the Python codebase so that it runs within the Metro sandbox.
You can then go a step further, and have the C++ shell expose WinRT libraries to the Python environment. There's probably a way to get Python to expose WinRT objects, but that would be a LOT of work.
You won't be able to call directly from JavaScript into Python - you'll need a WinRT component in the middle.
This is a lot of work, and requires some fairly low level work in C.
A: You have two options:
*
*add python.exe to your PATH and, from within a terminal, enter the folder that contains your test.py and run python.exe test.py, using your current installation
*you can follow step1 and step2 of this post and install a brand new python from scratch
If you can provide some more informations, we can help you better.
Can I run a file (say test.py on the desktop) from my JavaScript app?
This can be your real problem, what do you mean with this statement? generally the answer would be "no, you can't" | unknown | |
d11376 | train | First, iframe src is never .blade.php file. You can create a route /game and map that route to controller which then returns the .blade.php view. So, in your view:
<iframe src="{{URL::to('/')}}/game" width="1519" height="690"></iframe>
And then in web.php
Route::get('game', 'HomeController@game');
And in HomeController.php:
public function game(){
return view('game');
}
In which file are you writing tag? What's the full error that you are getting? Maybe enclosing your variables inside quotes like this will solve the problem.
<script>
var userID = "{{ auth()->user()->id }}";
var userCredit = "{{ auth()->user()->id }}";
</script> | unknown | |
d11377 | train | You also need to set the current typeface on the test TextView. This solved my problem.
private void refitText(String text, int targetFieldWidth, int targetFieldHeight) {
//
// Bla bla bla codes
//
//
// THIS IS THE FIX
// Put the current typeface of textview to the test-textview
mTestView.setTypeface(getTypeface());
// Initialize the dummy with some params (that are largely ignored anyway, but this is
// mandatory to not get a NullPointerException)
mTestView.setLayoutParams(new LayoutParams(targetFieldWidth, targetFieldHeight));
// maxWidth is crucial! Otherwise the text would never line wrap but blow up the width
mTestView.setMaxWidth(targetFieldWidth);
if (mSingleLine) { | unknown | |
d11378 | train | Actually, you don't need the API key, because Google Maps JavaScript API V3 is used there.
Only Google maps Javascript V2 requires an api-key, and it's deprecated now. | unknown | |
d11379 | train | Grails performs validation via a Domain's constraints block. For example:
class User {
String username
String password
static constraints = {
username nullable: false, maxSize: 50, email: true
password nullable: false, maxSize: 64
}
}
See documentation.
Validation is performed during a couple of different actions on the domain:
user.save() // validates and only persists if no errors
user.validate() // validates only
Again, see the documentation. This is similar to what Spring's @Valid does. Looking at its documentation, it states:
Spring MVC will validate a @Valid object after binding so-long as an
appropriate Validator has been configured.
What makes this basically the same as what Grails is doing is that it occurs after binding. For JSON/XML to Domain object conversion, it is really as simple as this:
def jsonObject = request.JSON
def instance = new YourDomainClass(jsonObject)
See this answer here. | unknown | |
d11380 | train | It fires on every request; it may be images, scripts, handlers, pages, whatever.
If you debug and put a breakpoint on it you can see which files call it. You can also place this line inside to see what is calling it live.
Debug.Write("call from: " + HttpContext.Current.Request.Path); | unknown | |
d11381 | train | Something like a lightbox, perhaps?
I know the specific interaction you're asking about, and personally, I've always found it really obnoxious.
A: If you don't want to use a server side script to automatically create a smaller version of the image that you would link to the larger version, I wouldn't recommend using JavaScript to resize an image either. I'd recommend setting the image's dimensions in CSS and then attaching a click handler with JS that showed the image in some sort of overlay (or just using HTML to link to the larger image file). | unknown | |
d11382 | train | This helped me in any directories:
{
"presets": ["@babel/react", "minify"],
"ignore": ["../**/*.min.js"]
} | unknown | |
d11383 | train | I'm not sure if this is what you want, but if you want to project two 2D profiles of the 3D surface you can use offset=z in the contour plot, where z is where on the z-axis you want to plot the profiles. Here is an example: https://matplotlib.org/3.3.1/gallery/mplot3d/contour3d_3.html | unknown | |
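For reference, a minimal sketch of that idea (untested; variable names are arbitrary):
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(-2, 2, 100), np.linspace(-2, 2, 100))
Z = np.exp(-(X**2 + Y**2))

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(X, Y, Z, alpha=0.5)
ax.contour(X, Y, Z, zdir='z', offset=0)   # profile projected onto the floor
ax.contour(X, Y, Z, zdir='x', offset=-2)  # profile projected onto a side wall
plt.show()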
d11384 | train | You can post to different routes, regardless of where you currently are. So assuming you have a Razor page SwitchTheme.cshtml with a code-behind that switches the theme on POST, then you can adjust your <form> tag to post to that page:
<form asp-page="/SwitchTheme" method="post">
<!-- … -->
</form>
Note the use of the asp-page tag helper to generate the action attribute with a link to the page.
For changing things like the design, which doesn’t directly have some page content you want to display, you could also use a simple controller that makes the change and then redirects back. Then, you would use the asp-action and asp-controller tag helpers instead:
<form asp-controller="Utility" asp-action="SwitchTheme" asp-route-returnUrl="@Context.Request.Path" method="post">
<!-- … -->
</form>
public class UtilityController : ControllerBase
{
[HttpPost]
public IActionResult SwitchTheme([FromForm] bool isDark, string returnUrl)
{
// switch the theme
// redirect back to where it came from
return LocalRedirect(returnUrl);
}
} | unknown | |
d11385 | train | This works for me for creating the QR code, using this API:
https://chart.googleapis.com/chart?chs=300x300&cht=qr&chl={data}
In place of data use the data which you want to convert into a QR code.
Code:-
<div id="data_id">
<div class="col">
<?php
echo '<img src="https://chart.googleapis.com/chart?chs=300x300&cht=qr&chl={data}">';
?>
</div>
</div>
<button id="downloadid" class="btn btn-small btn-primary">Download</button>
Script For the Downloading the QR code image:-
<script>
var d = document.getElementById("downloadid");
d.addEventListener("click", function(e) {
var div = document.getElementById("data_id");
var opt = {
margin: [20, 20, 20, 20],
filename: `filname.pdf`,
image: {
type: 'jpg',
quality: 0.98
},
html2canvas: {
scale: 2,
useCORS: true
},
jsPDF: {
unit: 'mm',
format: 'letter',
orientation: 'portrait'
}
};
html2pdf().from(div).set(opt).save();
});
</script> | unknown | |
d11386 | train | Assuming by "explicit template instantiation" you mean something like
template class Foo<int>; // explicit type instantiation
// or
template void Foo<int>(); // explicit function instantiation
then these must go in source files as they are considered definitions and are consequently subject to the ODR.
A: I've always done it in a cpp file. In a header, it would violate the one definition rule, at least (in the usual case) when the header was included in more than one cpp file (though there are ways to avoid that, which can be useful under a few, specific circumstances).
A: Either one.
If you are declaring a specific instance, you might declare it in your cpp file. However, if you are declaring a class member or something that will be referenced from multiple cpp files, that would go in your header file. | unknown | |
d11387 | train | From your code it looks like siteData.theme holds the theme name i.e. 'b' so you can just add:
$('[data-role=page]').page({
theme: siteData.theme
});
To your script.
This will not however change the header and footer from the default theme. You could just in the newPage creation add data-theme=siteData.theme depending on whats in siteData.theme of course. | unknown | |
d11388 | train | I experienced the same problem with an old version of d3 (3.3.3). I'm not sure if it's been fixed since. The problem, as you said, is that Chrome gets in a weird state where a mouse click triggers a mousemove event. Completely closing Chrome and reopening it seems to get it out of this state.
I was able to resolve the issue by checking in d3's mousemove function for actual movement. I added two lines to the moved() function within mousedowned() in d3:
function moved() {
if(d3.event.movementX === 0 && d3.event.movementY === 0) // new
return false; // new
dragged = 1;
translateTo(d3.mouse(target), l);
zoomed(event_);
}
That seems to resolve the problem without having to restart Chrome.
A: My Chrome instance must have gotten into some strange state while I was developing and debugging. I restarted Chrome and tried again, and now it's working fine. | unknown | |
d11389 | train | Looks like adding a RegExp is what is needed. In case anyone else needs this, here is the relevant lib\modules\apostrophe-search\index.js code:
module.exports = {
perPage: 15,
construct: function(self, options) {
self.indexPage = function(req, callback) {
req.query.search = req.query.search || req.query.q;
var allowedTypes;
var defaultingToAll = false;
var cursor = self.apos.docs.find(req, { lowSearchText: new RegExp(req.query.search, 'i') } )
.perPage(self.perPage);
if (self.filters) {
var filterTypes = _.filter(
_.pluck(self.filters, 'name'),
function(name) {
return name !== '__else';
}
);
A: As you know I am the lead developer of Apostrophe at P'unk Avenue.
Your solution does work, however one serious concern with it is that you are not escaping the user's input to prevent regular expression meta-characters like . and * from being interpreted as such. For that you can use apos.utils.regExpQuote(s), which returns the string with the dangerous characters escaped via \.
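At minimum, an escaped version of your cursor would look like this (untested sketch):
var safe = self.apos.utils.regExpQuote(req.query.search || '');
var cursor = self.apos.docs.find(req, { lowSearchText: new RegExp(safe, 'i') })
  .perPage(self.perPage);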
There is a better way to do this though: just use req.query.autocomplete instead. Apostrophe has a built-in autocomplete cursor filter that works differently from the search filter. The autocomplete filter allows partial matches (although only at the start of a word) and then it feeds the words it finds through the regular search so that results are still sorted by match quality. It also maintains much of the performance benefit of using search.
A regular expression search like yours will scan the entire mongodb collection (well, at least all docs of the relevant type), which means you'll have performance problems if you have a lot of content.
One caveat with autocomplete is that it only "sees" words in high-priority fields like title, tags, etc. It does not see the full text of a doc the way your regex search (or the search filter) can. This was a necessary tradeoff to keep the performance up. | unknown | |
d11390 | train | You guys wont believe what was problem.
I just realized that because text input box had size="150"set, div element above jumped like insane. It's fixed now. | unknown | |
d11391 | train | The whole point of your myNamespace property seems more than just questionable, but if you insist and still need a function that is bound to your class instance, just bind the function in the constructor, or use an arrow function which does not have its own this, but keeps this whatever it pointed to at the time of definition: (code example demonstrates both):
class Sample {
key = '';
constructor(key) {
this.key = key;
this.myNamespace.saySomething = this.myNamespace.saySomething.bind(this);
}
myNamespace = {
saySomething: function(message) {
console.log('message:', message);
console.log('key:', this.key);
}
}
myOtherNamespace = {
saySomething: (message) => {
console.log('message:', message);
console.log('key:', this.key);
}
}
getTheKey() {
console.log('key', this.key);
}
}
let sample = new Sample('thekey');
sample.myNamespace.saySomething('message'); // shows-> key: thekey
sample.myOtherNamespace.saySomething('other message'); // shows-> key: thekey
sample.getTheKey(); // shows-> key: thekey
A: "This" is pointing to two different things. "This" in your namespace function doesn't have a key value, but your constructed "sample" does - your getTheKey function accurately points to the correct "this".
To be more specific, getTheKey sees the constructed sample as "this", while the function inside saySomething sees the myNamespace object it was called on as "this". The constructed sample has a value for key and myNamespace does not.
You can use an arrow function instead, like so:
myNamespace = {
saySomething: (message) => {
console.log('message:', message);
console.log('key:', this.key);
}
to target your constructed sample instead. | unknown | |
d11392 | train | Here the problem was connection timeout due to large database.
Adding $conn.StatementTimeout = 10000 solved me the issue :)
Thanks | unknown | |
d11393 | train | I am going to assume the quotes are coming from the Tumblr API. In this case, you just need to strip the quotes:
<div class="relates">'+titles[i].substring(1, titles[i].length - 2)+'</div>
substring(1, titles[i].length - 2) will remove the first character (index 0) and the last character (index length - 1) from the string.
Docs | unknown | |
d11394 | train | I think that the problem is the relation design. In your example Sale is the main entity, so the transport_invoice reference is not needed. The "sale" reference in TransportInvoice.php is all that doctrine need, so edit Sale as follow an try again.
/**
* @var TransportInvoice
*
* @ORM\OneToOne(targetEntity="WKDA\Common\Entity\Car\TransportInvoice\TransportInvoice", mappedBy="sale")
*/
protected $transportInvoice;
I hope it help you. | unknown | |
d11395 | train | Creating Date objects from strings is unreliable, as you have observed. You should manually parse those strings into Date objects, like this:
// assumes date string is in the format "yyyy-MM-ddTHH:mm:ss"
var dateMatch = dataItem.ItemDate.match(/(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})/);
var year = parseInt(dateMatch[1], 10);
var month = parseInt(dateMatch[2], 10) - 1; // convert to javascript's 0-indexed months
var day = parseInt(dateMatch[3], 10);
var hours = parseInt(dateMatch[4], 10);
var minutes = parseInt(dateMatch[5], 10);
var seconds = parseInt(dateMatch[6], 10);
var date = new Date(year, month, day, hours, minutes, seconds);
A: First, note that the Timeline of the CHAP Links library does not support strings as Date, you should provided Dates or a timestamp with a Number (note that the successor of the Timeline, vis.js, does support strings as date). Strings as Date work nowadays because most browsers now support creating dates from an ISO date string.
The issue you have is because you provide an ISO Date String without time zone information. Apparently not all browsers have the same default behavior in that case. Enter the following in a JavaScript console in both Firefox and an other browser:
new Date("2014-06-09T18:51:37").toISOString()
// output is ambiguous, time zone information missing
and you will see them adding timezone information in different ways. To prevent these kind of ambiguities, you should provide timezone information yourself. To specify the time in UTC, add a Z to the end of the string:
new Date("2014-06-09T18:51:37Z").toISOString()
// output is unambiguous | unknown | |
d11396 | train | In your manifest you are using android:theme="@style/AppTheme.NoActionBar" so the main activity will not contain the action bar and you will get a NullPointerException.
The assert getSupportActionBar() != null throw the Exception because it is always null in this case. In this case the Assert works like:
if(!getSupportActionBar() != null) {
throw new Exception(); //NullPointerException in this case
} | unknown | |
d11397 | train | The words read per minute average is about 250-300, once you know this you just need to:
*
*Get the article word count.
*Divide this number by 275 (more or less).
*Round the result to get a integer number of minutes.
A: According to a study conducted in 2012, the average reading speed of an adult for text in English is: 228±30 words, 313±38 syllables, and 987±118 characters per minute.
You can therefore calculate an average time to read a particular article by counting one of these factors and dividing by that average speed. Syllables per minute is probably the most accurate, but for computers, words and characters are easier to count.
Study Citation:
Standardized Assessment of Reading Performance: The New International Reading Speed Texts IReST by
Susanne Trauzettel-Klosinski; Klaus Dietz; the IReST Study Group, published in Investigative Ophthalmology & Visual Science August 2012, Vol.53, 5452-5461
A: A nice solution here, on how
to get an estimated read time of any article or blog post https://stackoverflow.com/a/63820743/12490386
here's an easy one
*
*Divide your total word count by 200.
*You’ll get a decimal number, in this case, 4.69. The first part of your decimal number is your minute. In this case, it’s 4.
*Take the second part (the decimal portion) and multiply it by 60. Those are your seconds. Round up or down as necessary to get a whole second. In this case, 0.69 x 60 = 41.4. We'll round that to 41 seconds.
*The result? 938 words = a 4 minute, 41 second read. | unknown | |
d11398 | train | Assuming @admin_user as some user with admin role.
@admin_areas = User.joins(:subscriptions).where('subscriptions.area_id' => @admin_user.administrative_areas.map(&:id))
You can use cancan gem to manage roles and giving controls to different roles.
EDIT : You can try something like below
In user.rb
scope :users_to_manage, lambda{ joins(:subscriptions).where('subscriptions.area_id' => current_user.administrative_areas.map(&:id)) #if you have access current_user in model
In ability.rb
can :manage, Area.users_to_manage | unknown | |
d11399 | train | One advantage of using gulp is that you can then orchestrate more complicated tasks based on that task. For example, you might want a gulp task that first builds the project then executes the unit tests. You could then execute that with one command rather than two commands.
However if you're only running the gulp task that runs karma then there won't be any advantage in using gulp (other than the command being easier to type).
You can look at the Gulp Recipes page to see other tasks you can accomplish with gulp.
A: The only thing I can really think of is getting gulp to source all the appropriate 3rd party JS files required to run the tests using something like wiredep. For example
var gulp = require('gulp'),
Server = require('karma').Server,
bowerDeps = require('wiredep')({
dependencies: true,
devDependencies: true
}),
testFiles = bowerDeps.js.concat([
'src/**/*.js',
'test/**/*.js'
]);
gulp.task('test', function(done) {
new Server({
configFile: 'karma.conf.js',
files: testFiles,
singleRun: true
}, function(exitStatus) {
if (exitStatus) {
console.warn('Karma exited with status', exitStatus);
}
done();
}).start();
});
Otherwise, you have to maintain the files array in your karma.conf.js file manually. | unknown | |
d11400 | train | You can detect it using the following function:
document.addEventListener("keydown", function (event) {
event.stopPropagation();
event.preventDefault();
if(event.ctrlKey && event.keyCode == 81)
{
console.log("CTRL + Q was pressed!");
}
else
{
console.log("Something else was pressed.");
}
});
The stopPropagation() and preventDefault() calls prevent the browser's default behaviour from occurring.
If you want to detect other keys, this page is rather useful: http://asquare.net/javascript/tests/KeyCode.html | unknown |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.