d13301 | train | Why don't you just create another short-cut on the desktop that starts devenv.exe without elevation?
A: MSDN states:
Launching an Un-Elevated Application from an Elevated Process
A frequently asked question is how to launch an un-elevated application from an elevated process, or more fundamentally, how do I launch a process using my un-elevated token once I'm running elevated. Since there is no direct way to do this, the situation can usually be avoided by launching the original application as standard user and only elevating those portions of the application that require administrative rights. This way there is always a non-elevated process that can be used to launch additional applications as the currently logged on desktop user. Sometimes, however, an elevated process needs to get another application running un-elevated. This can be accomplished by using the task scheduler within Windows Vista®. The elevated process can register a task to run as the currently logged on desktop user.
You could use that and implement a simple launcher app (possibly a VS macro) that you start instead of your application. The launcher app would:
* Create a scheduled task that would run immediately or that is triggered by an event
* Start the debuggee
* Connect the VS debugger to the started process
A: The VS debugger is tied to the browser instance that VS launched, but you can still use another browser instance to browse the site under test. Actions on the server side will still go through the debugger (you won't get client-side debugging, but IE8 developer tools and Firebug are still available, of course).
A: From your application, call ChangeWindowMessageFilter with the following values to allow dragging and dropping to/from your elevated application and non-elevated applications like Explorer:
ChangeWindowMessageFilter (WM_DROPFILES, MSGFLT_ADD);
ChangeWindowMessageFilter (WM_COPYDATA, MSGFLT_ADD);
ChangeWindowMessageFilter (0x0049, MSGFLT_ADD);
Credit: http://blog.helgeklein.com/2010/03/how-to-enable-drag-and-drop-for.html
A: I use old-school command line for this purpose:
runas /trustlevel:0x20000 "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe"
And then just press F5 in Studio.
A: You can use the elevated Visual Studio to do your programming, you just can't have Visual Studio launch the executable.
I like 0xA3's answer, but if you don't want to go to the trouble of creating a VS macro and a scheduled task to launch your program, you can just create a shortcut on the desktop to the executable (either the debug or release version) and use that to launch your program.
If you launch the debug version, you can use "Attach to Process" in the Visual Studio Debug menu to attach and do some debugging.
d13302 | train | Actually there is no way of doing this without running a foreground service. Being on the whitelist may not be appropriate for your application, and even if it is, you have to ask the user for a permission that can be seen as something dangerous from the end user's point of view.
However, I have a trick for this. Listen for Android's broadcasts, and when you catch that the device is moving into doze mode, start a foreground service. In most cases the user won't be able to see your foreground notification and won't know that you are running a service, because the device being in doze mode means it is sitting somewhere the user isn't watching. So you can do whatever is needed.
You should also listen for the broadcast sent when doze mode finishes. When that happens, stop your foreground service and go back to your normal logic with alarm managers.
PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    if (intent.getAction().equals("android.os.action.DEVICE_IDLE_MODE_CHANGED")) {
        if (pm.isDeviceIdleMode()) {
            startForegroundService();
            //stopAlarmManagerLogic();
        } else {
            stopForegroundService();
            //startAlarmManagerLogic();
            return;
        }
        return;
    }
}
A: You can request Android to whitelist your app for doze mode by sending a high priority GCM message. But remember this might make your app not get approved by Google Play:
Intent intent = new Intent();
String packageName = context.getPackageName();
PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
if (pm.isIgnoringBatteryOptimizations(packageName)) {
    intent.setAction(Settings.ACTION_IGNORE_BATTERY_OPTIMIZATION_SETTINGS);
} else {
    intent.setAction(Settings.ACTION_REQUEST_IGNORE_BATTERY_OPTIMIZATIONS);
    intent.setData(Uri.parse("package:" + packageName));
}
context.startActivity(intent);
https://developer.android.com/training/monitoring-device-state/doze-standby.html#whitelisting-cases
A: Edit: WakefulBroadcastReceiver is now deprecated.
Firstly, instead of directly calling a service from the AlarmManager, call a broadcast receiver which then calls the service.
The broadcast receiver should extend WakefulBroadcastReceiver instead of a regular BroadcastReceiver.
Then let the broadcast receiver schedule a new alarm and start the service using startWakefulService() instead of startService():
public class MyAwesomeReceiver extends WakefulBroadcastReceiver {

    int interval = 2 * 60 * 60 * 1000;

    @Override
    public void onReceive(Context context, Intent intent) {
        Intent serviceIntent = new Intent(context, MyAwesomeService.class);
        Intent receiverIntent = new Intent(context, MyAwesomeReceiver.class);
        PendingIntent alarmIntent = PendingIntent.getBroadcast(context, 11, receiverIntent, PendingIntent.FLAG_UPDATE_CURRENT);
        AlarmManager alarmManager = (AlarmManager) context.getSystemService(ALARM_SERVICE);
        alarmManager.setExactAndAllowWhileIdle(AlarmManager.RTC_WAKEUP, System.currentTimeMillis() + interval, alarmIntent);
        startWakefulService(context, serviceIntent);
    }
}
The WakefulBroadcastReceiver and startWakefulService() will give your app a 10-second window to do what it needs to do.
Also,
You can always ask the user to let your app ignore the battery optimization functionality using:
PowerManager powerManager = (PowerManager) getSystemService(Context.POWER_SERVICE);
Intent intent = new Intent();
intent.setFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP);
if (powerManager.isIgnoringBatteryOptimizations(getPackageName())) {
    intent.setAction(Settings.ACTION_IGNORE_BATTERY_OPTIMIZATION_SETTINGS);
} else {
    intent.setAction(Settings.ACTION_REQUEST_IGNORE_BATTERY_OPTIMIZATIONS);
    intent.setData(Uri.parse("package:" + getPackageName()));
    startActivity(intent);
}
and in the manifest
<uses-permission android:name="android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS"></uses-permission>
A: After Android N, no app can run in the background forever. However you can use Firebase JobDispatcher, which can help you run your app even if the device is in doze mode. With the help of Firebase JobDispatcher you can tell the system that your app should run at a particular time if the provided conditions are met.
d13303 | train | I think the problem here is in the conanfile of freeglut. The conanfile by default says in packaging:
self.cpp_info.components["freeglut_"].names["pkg_config"] = "freeglut" if self.settings.os == "Windows" else "glut"
So it tries to find freeglut.
At the same time, by default the conanfile sets "replace_glut": True. In CMake this option sets:
option to build either as "glut" (ON) or "freeglut" (OFF)
So your library is built as glut, but the conanfile still tries to find freeglut.
I would suggest you try to build the conan package with the option replace_glut=False.
If that fixes the issue, please open a bug here: https://github.com/conan-io/conan-center-index/issues
d13304 | train | If you want/need full text search then lunr could certainly be used for providing an index on the tags in those documents.
If these links had a description, then lunr (or any other full text search) would be a good fit, you really are then doing a full text search.
Tags, to me anyway, imply a faceted search. I would imagine that you would have a finite list of these tags, and you would then want to find which links have these exact tags. You could approximate something similar with lunr, but there are probably better tools for the job.
Now, if you had a large list of tags, potentially with some kind of description, then you could use lunr to allow users to search for tags, and then use those tags to perform the faceted search on your links.
As for using lunr with firebase, as long as lunr has access to all of the link data for indexing, it doesn't care where you actually store the documents. I'm not at all familiar with firebase so can't comment on the practicality of integrating lunr with that service, perhaps someone else can help you out with that aspect.
d13305 | train | You might try instantiating your serializer whenever your view is called by wrapping it in a function (you make a serializer factory):
def like_serializer_factory(type_of_like):
    if type_of_like == 'book':
        class LikeSerializer(serializers.ModelSerializer):
            class Meta:
                model = models.Like
                fields = ('id', 'created', )

            def restore_object(self, attrs, instance=None):
                # create Like instance with Book contenttype
                pass
    elif type_of_like == 'author':
        class LikeSerializer(serializers.ModelSerializer):
            class Meta:
                model = models.Like
                fields = ('id', 'created', )

            def restore_object(self, attrs, instance=None):
                # create Like instance with Author contenttype
                pass
    return LikeSerializer
Then override this method in your view:
def get_serializer_class(self):
    return like_serializer_factory(type_of_like)
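The class-per-branch factory is plain Python underneath. Here is a hedged, framework-free sketch of the same idea (the class and attribute names are illustrative placeholders, not DRF's API):

```python
def like_serializer_factory(type_of_like):
    # Each branch defines its own class; the function returns the class
    # object itself, which the caller can then instantiate.
    if type_of_like == 'book':
        class LikeSerializer:
            content_type = 'book'
    elif type_of_like == 'author':
        class LikeSerializer:
            content_type = 'author'
    else:
        raise ValueError('unknown type: %s' % type_of_like)
    return LikeSerializer

print(like_serializer_factory('book').content_type)    # book
print(like_serializer_factory('author').content_type)  # author
```

Because a fresh class is built on every call, each returned class carries only the behaviour for its own content type.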
A: Solution 1
Basically there is a method on the GenericAPIView class called get_serializer_context.
By default your view, request and format class are passed to your serializer.
DRF code for get_serializer_context:
def get_serializer_context(self):
    """
    Extra context provided to the serializer class.
    """
    return {
        'request': self.request,
        'format': self.format_kwarg,
        'view': self
    }
you can override that on your view like this
def get_serializer_context(self):
    data = super().get_serializer_context()
    # Get the book from post and add to context
    data['book'] = self.request.POST.get('book')
    return data
And use this on your serializer class
def restore_object(self, attrs, instance=None):
    # Get book from context to use
    book = self.context.get('book', None)
    author = attrs.get('author', None)
    if book is not None:
        # create Like instance with Book contenttype
        pass
    elif author is not None:
        # create Like instance with Author contenttype
        pass
Solution 2
Add a field on your serializer
class LikeSerializer(serializers.ModelSerializer):
    # New field; it should be write only, else it will be
    # returned as serializer data
    book = serializers.IntegerField(write_only=True)

    class Meta:
        model = models.Like
        fields = ('id', 'created', )

    def save(self, **kwargs):
        # Remove book from validated data, so the serializer does
        # not try to save it
        self.validated_data.pop('book', None)
        # Call model serializer save method
        return super().save(**kwargs)
d13306 | train | Use $http.post
$http.post(createdUrl)
    .then(function(response) {});
A: When sending sensitive data with the POST method, it's advised to take it out of the query string and pass it in the request body, like this:
var myUrlWithoutQueryString = ...; // here goes your url
var loginData = {
    name: ...,
    ps: ...,
    action: ...
    // and so on
};
$http.post(myUrlWithoutQueryString, loginData);
Notice that the loginData you're passing should match the parsing requirements on the server side (matching parameters, for instance).
d13307 | train | This is not a huge problem, since the file can be edited like any text document.
If you are on ssh and have root privileges, just nano /etc/passwd (I feel evil typing that, haha); otherwise, if there is another user with root privileges (other than pi), log in as them and edit the passwd file.
If there are no other users, put your SD card in your mac and edit the file in any text editor.
A: I solved it by uncommenting the line that begins with #pi 1000 1000 (something like that) in the passwd file (access it by typing nano /etc/passwd), erasing the #. When I did that, everything returned to normal and I was able to use sudo commands again.
d13308 | train | For IE < 9 you should use EOT files. WOFF is for modern browsers. The FontSquirrel webfont generator is really helpful for this and will give you the @font-face kit too.
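A typical @font-face block covering both formats looks like this (a sketch; the font name and file paths are placeholders you would replace with the generator's output):

```css
@font-face {
  font-family: 'MyWebFont';
  src: url('mywebfont.eot');                                    /* IE < 9 */
  src: url('mywebfont.eot?#iefix') format('embedded-opentype'), /* IE 6-8 */
       url('mywebfont.woff') format('woff');                    /* modern browsers */
}
```

The duplicated `src` line with the `?#iefix` query is the usual trick to stop old IE from choking on the multi-format declaration.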
d13309 | train | Finally I found a solution for it. Bring the jQuery Select2 files into your FrontAccounting install and include them in header.inc; then there is one more step. Go to Root_of_FA/company/0/js_cache/utilis.js and find the line below
window.scrollTo(0,0);if(!newwin){setFocus();}
}
}, false
This will be around lines 27-36 of that file. Then change it like the one below.
window.scrollTo(0,0);if(!newwin){setFocus();}
}
$(".combo").select2({
placeholder: "Select a state",
allowClear: true
});
},false
That's it; it will work for you. If you need more details, check here:
KV codes Select2 and Frontaccounting
d13310 | train | Thanks, great to see you managed to install it.
Sorry, no, there are no bulk actions for hiding; you have to select the elements one by one or decide to hide them all.
d13311 | train | The UiApp is deprecated. That would be causing the server error. Sometimes the UiApp functions still work (like the chart service) but the output crashes. The Apps Script library functionality should still work.
https://developers.google.com/apps-script/reference/ui/ui-app?hl=en
The HTML service is meant as a replacement for the UiApp, but unfortunately it does not cover all the same functionality.
https://developers.google.com/apps-script/guides/html/
d13312 | train | You may try to use join like this:
SELECT main.id, s.name AS shoes, g.name AS gloves
FROM tbl AS main
LEFT JOIN tbl s ON main.id = s.id
LEFT JOIN tbl g ON main.id = g.id
d13313 | train | That is because Azure App Service for PHP 8 no longer uses Apache but Nginx. This is related to the question "How to Deploy an App Service in azure with Laravel 8 and PHP 8 without public endpoint?".
As I mentioned there I will mention here too: I've written a full blog article about my first experiences with PHP 8 on Azure App Services which includes the issue you mention here.
Have a look at it and let me know if it solved your struggles.
d13314 | train | If you rewrite the tag/tag model and override the getTaggedProductsUrl() with the following it will work:
public function getTaggedProductsUrl()
{
    $fullTargetPath = Mage::getUrl('tag/product/list', array(
        'tagId' => $this->getTagId(),
        '_nosid' => true
    ));
    $targetPath = substr($fullTargetPath, strlen(Mage::getBaseUrl()));
    $rewriteUrl = Mage::getModel('core/url_rewrite')->loadByIdPath($targetPath);
    if ($rewriteUrl->getId()) {
        return $rewriteUrl->getRequestPath();
    }
    return $fullTargetPath;
}
This is assuming you are using the target path without the base url as the "ID path" and the "Target Path" property, e.g tag/product/list/tagId/30/.
If you don't want to duplicate that setting then you will need to use the tag resource model and manually adjust the SQL to match the target_path column instead the id_path, because the resource model doesn't come with a method predefined for you.
Still, you can use the Mage_Tag_Model_Resource_Tag::loadByRequestPath() method as a reference.
d13315 | train | You could look at this problem as permutations of 1 and -1, which while being summed together, the running sum (temp value while adding up numbers left-to-right) must not become less than 0, or in case of parenthesis, must not use more right-parenthesis than left ones.
So:
If n=1, then you can only do 1+(-1).
If n=2, then can do 1+(-1)+1+(-1) or 1+1+(-1)+(-1).
So, as you create these permutations, you see you can only use one of the two options, either 1 or -1, in every recursive call, and by keeping track of what has been used with nLeft and nRight you can have the program know when to stop.
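A small sketch of that recursion (my own variable names, using '(' for 1 and ')' for -1 so the sequences read as parentheses):

```python
def balanced(n_left, n_right, prefix=""):
    """Generate all sequences of n_left '('s and n_right ')'s such that
    every running sum (treating '(' as +1 and ')' as -1) stays >= 0."""
    if n_left == 0 and n_right == 0:
        yield prefix
        return
    if n_left > 0:            # a +1 is always legal
        yield from balanced(n_left - 1, n_right, prefix + "(")
    if n_right > n_left:      # a -1 is only legal if it keeps the sum >= 0
        yield from balanced(n_left, n_right - 1, prefix + ")")

print(list(balanced(2, 2)))  # ['(())', '()()']
```

For n=2 this produces exactly the two arrangements described above, and for larger n the counts follow the Catalan numbers.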
d13316 | train | You don't have to. But on Windows you have to explicitly state that you want the class to export symbols with __declspec(dllexport) (which is probably what that macro expands to).
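Such a macro usually expands differently depending on whether the library is being built or consumed. A hedged sketch of the common pattern (MYLIB_API, MYLIB_BUILD and Widget are my own placeholder names, not from the question):

```cpp
#include <cassert>

// Hypothetical export macro: dllexport while building the DLL,
// dllimport for consumers, and nothing at all on non-Windows platforms.
#if defined(_WIN32)
  #ifdef MYLIB_BUILD
    #define MYLIB_API __declspec(dllexport)
  #else
    #define MYLIB_API __declspec(dllimport)
  #endif
#else
  #define MYLIB_API
#endif

class MYLIB_API Widget {
public:
    int id() const { return 42; }
};
```

On non-Windows platforms the macro expands to nothing, which is why code like this compiles unchanged with GCC or Clang even though the attribute is Windows-specific.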
d13317 | train | You're mixing older Oracle join syntax with newer ANSI join syntax, which is a bit confusing and might trip up the optimiser; but the main problem is that you have an inner join between your generated date list and your gl table, and you then also have a join condition in the where clause which keeps it as an inner join even if you change the join type.
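The ON-versus-WHERE behaviour is easy to demonstrate on toy tables (a hedged sqlite sketch, not the poster's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(id INTEGER);
    CREATE TABLE b(id INTEGER, flag TEXT);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1, 'Y');
""")

# Filter in the ON clause: unmatched rows from `a` survive as NULLs.
on_rows = con.execute(
    "SELECT a.id, b.flag FROM a LEFT JOIN b ON a.id = b.id AND b.flag = 'Y'"
).fetchall()

# Same filter in WHERE: the NULL flag fails the test, so the outer
# join silently turns back into an inner join.
where_rows = con.execute(
    "SELECT a.id, b.flag FROM a LEFT JOIN b ON a.id = b.id WHERE b.flag = 'Y'"
).fetchall()

print(on_rows)     # [(1, 'Y'), (2, None)]
print(where_rows)  # [(1, 'Y')]
```

This is exactly why the rewrite below moves every gl condition into the ON clause of the left outer join.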
Without the table structures or any data, I think you want:
...
FROM (
Select first_date + Level-1 dt
From
(
Select trunc(sysdate) first_date,
trunc(sysdate)+60 last_date
from dual
)
Connect By Level <= (last_date - first_date) +1
) studate
CROSS JOIN GPS_RESERVATION gr
LEFT OUTER JOIN GPS_RESERVATION_LOAD gl
ON gl.work_center_no = 'ALIN'
AND gl.duedate = studate.dt
AND gl.plant = 'W'
AND gl.RESERVATION_NO = gr.RESERVATION_NO
WHERE gr.ACTIVE_FLAG = 'Y'
AND gr.reservation_no = '176601'
ORDER BY
gl.DUEDATE
The cross-join gives you the cartesian product of the generated dates and the matching records in gr; so if your gr filter finds 5 rows, you'll have 300 results from that join. The left outer join then looks for any matching rows in gl, with all the filter/join conditions related to gl within that on clause.
You should look at the execution plans for your query and this one, firstly to see the difference, but more importantly to check it is joining and filtering as you expect and in a sensible and cost-effective way. And check the results are correct, of course... You might also want to look at a version that uses a left outer join but keeps your original where clause, and see that that makes it go back to an inner join.
d13318 | train | Inner AsyncTask in a separate file. This is the given situation:
public class MainActivity extends BaseActivity implements Foo, Bar {

    ... not so much code ...

    private class GetSomeStuff extends AsyncTask<String, Integer, Boolean> {
        ... lot of code ...
    }

    private class GetOtherStuff extends AsyncTask<String, Integer, Boolean> {
        ... more code ...
    }
}
This situation makes my MainActivity very unreadable. This is a single-activity app with many fragments, so the main activity only does the navigation and fragment management. Is it possible to define these AsyncTask classes in another file so my code is more readable, while keeping the accessibility?
Thanks!
d13319 | train | A couple of thoughts come to mind.
* The connection string you are using is the connection string for connecting to a SQL Server DB instead of a MySQL .Net Connector connection string. You're going to have to change the connection string example from the book to what MySQL needs. For example, MySQL doesn't have a Trusted Connection option. (http://dev.mysql.com/doc/refman/5.1/en/connector-net-connection-options.html)
* In your SQLProductsRepository you'll need to use the MySQLConnection object and so forth instead of a SQLConnection object if used. I think the book used Linq to SQL or Linq to Entity Framework. Either way you'll have to modify the code to use the MySQL... objects or modify the model to hit your MySQL if it's Entity Framework. If you already did this then the connection string in it will give a good idea of what you need in #1.
http://dev.mysql.com/doc/refman/5.1/en/connector-net-tutorials-intro.html
EDIT - With your update...
MySQL doesn't have the same syntax as SQL Server sometimes. I am assuming that is what this error is pointing to.
On the MySQL Connector connection string, it has an option to use SQL Server Syntax called SQL Server Mode. Set this to true and see if this works....
Something like:
Server=serverIP;Database=dbName;Uid=Username;Pwd=Password;SQL Server Mode=true;
d13320 | train | To filter out nodes whose inner text exactly equals a certain value, use the = operator wrapped in not() instead of contains(), in the same way:
/collection/device/dcs/dc/nodes/node[not(name='test')]
d13321 | train | The selector must be bound to some ng-model, which keeps track of the current value. Supposing you have something like this:
<select ng-model="FooBar">
Some options... (with their respectives value)
</select>
you can enable or desable multiple buttons using this var "FooBar":
<input ng-disabled="FooBar !== desiredEnableValue">
d13322 | train | If you want to dynamically load google's libraries, you should check out google's autoloader:
http://code.google.com/apis/ajax/documentation/#AutoLoading
It works quite nicely, but be careful if you use the autoloader wizard.
http://code.google.com/apis/ajax/documentation/autoloader-wizard.html
there's a bug in the copy-and-paste code that tripped me up:
http://code.google.com/p/google-ajax-apis/issues/detail?id=244
Also I found that for some of Google's libraries, if I try to asynchronously load scripts (like yours) and don't specify some of the optional parameters (language, callback, etc., even with an empty string), I'll see the behavior that you're seeing.
Edit: went ahead and tested it. Your solution here:
http://pastie.org/486925
d13323 | train | This was asked and answered in the following issue on spaCy's GitHub.
It looks like the script no longer worked after a refactor of the entity linking pipeline as it now expects either a statistical or rule-based NER component in the pipeline.
The new script adds such an EntityRuler to the pipeline as an example. I.e.,
# Add a custom component to recognize "Russ Cochran" as an entity for the example training data.
# Note that in a realistic application, an actual NER algorithm should be used instead.
ruler = EntityRuler(nlp)
patterns = [{"label": "PERSON", "pattern": [{"LOWER": "russ"}, {"LOWER": "cochran"}]}]
ruler.add_patterns(patterns)
nlp.add_pipe(ruler)
However, this can be replaced with your own statistical NER model. | unknown | |
d13324 | train | For a continuous loader:
timer = Timer.scheduledTimer(timeInterval: 0.001, target: self, selector: #selector(setProgress), userInfo: nil, repeats: true)
and
func setProgress() {
    time += 0.001
    downloadProgressBar.setProgress(time / 3, animated: true)
    if time >= 3 {
        self.time = 0.001
        downloadProgressBar.progress = 0
        let color = self.downloadProgressBar.progressTintColor
        self.downloadProgressBar.progressTintColor = self.downloadProgressBar.trackTintColor
        self.downloadProgressBar.trackTintColor = color
    }
}
A: Edit: A simple 3 second UIView animation (Recommended)
If your bar is just moving smoothly to indicate activity, possibly consider using a UIActivityIndicatorView or a custom UIView animation:
override func viewDidAppear(animated: Bool)
{
    super.viewDidAppear(animated)
    UIView.animateWithDuration(3, animations: { () -> Void in
        self.progressView.setProgress(1.0, animated: true)
    })
}
Make sure your progressView's progress is set to zero to begin with. This will result in a smooth 3 second animation of the progress.
Simple animated progress (Works but still jumps a bit)
https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIProgressView_Class/#//apple_ref/occ/instm/UIProgressView/setProgress:animated:
func setProgress() {
    time += 0.1
    progressView.setProgress(time / 3, animated: true)
    if time >= 3 {
        timer!.invalidate()
    }
}
Option with smaller intervals. (Not recommended)
Set your timer to a smaller interval:
timer = NSTimer.scheduledTimerWithTimeInterval(0.001, target: self, selector:Selector("setProgress"), userInfo: nil, repeats: true)
Then update your function
func setProgress() {
    time += 0.001
    progressView.setProgress(time / 3, animated: true)
    if time >= 3 {
        timer!.invalidate()
    }
}
A: It's hard to say exactly what the problem is. I would like to see the output if you put a print line in setProgress to print a timestamp. Is it actually firing every tenth of a second? My guess is that it is not.
Why not? Well, the timer schedules a run loop task in the main thread to execute the code in setProgress. This task cannot run until tasks in front of it in the queue do. So if there are long running tasks happening in your main thread, your timer will fire very imprecisely. My first suggestion is that this is perhaps what is happening.
Here is an example:
* You start a timer to do something every second.
* Immediately after, you start a long running main thread task (for example, you try to write a ton of data to a file). This task will take five seconds to complete.
* Your timer wants to fire after one second, but your file-writing is hogging the main thread for the next four seconds, so the timer can't fire for another four seconds.
If this is the case, then to solve the problem you would either need to move that main thread work to a background thread, or else figure out a way to do it while returning to the run loop periodically. For example, during your long running main thread operation, you can periodically call runUntilDate on your run loop to let other run loop tasks execute.
Note that you couldn't just increment the progress bar fill periodically during the long running main thread task, because the progress bar will not actually animate its fill until you return to the run loop.
A: What about the proper way of animating changes: animateWithDuration:animations: or CABasicAnimation? You can use these for creating smooth animations.
d13325 | train | If the names start in A2 with a heading in A1, you can just use this starting in C2 and filled down:
=COUNTIF(A$1:A1,A2)=0
to get true/false values, or
=--(COUNTIF(A$1:A1,A2)=0)
to get ones and zeroes.
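For comparison, the same first-occurrence logic sketched in plain Python (the name list is made-up sample data): COUNTIF over the rows above a cell is zero exactly when the name has not been seen yet.

```python
names = ["Ann", "Bob", "Ann", "Cai", "Bob"]

seen = set()
flags = []
for name in names:
    # 1 on the first occurrence (the count of earlier rows is 0), else 0
    flags.append(0 if name in seen else 1)
    seen.add(name)

print(flags)  # [1, 1, 0, 1, 0]
```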
d13326 | train | If someone faces the same question, this is the solution.
In the CSS file, add the rules below, and then in Markdown wrap the questions with the div notation, such as:
<div class="question">
1. My first question
1. My second question
</div>
/* -----------Question counter ---------*/
body {
counter-reset: li;
}
h1 {
font-family:Arial, Helvetica, sans-serif;
font-weight:bold;
}
h2 {
font-family:Arial, Helvetica, sans-serif;
font-weight:bold;
margin-top: 24px;
}
.question ol {
margin-left:0; /* Remove the default left margin */
padding-left:0; /* Remove the default left padding */
}
.question ol>li {
position:relative; /* Create a positioning context */
margin:0 0 10px 2em; /* Give each list item a left margin to make room for the numbers */
padding:10px 80px; /* Add some spacing around the content */
list-style:none; /* Disable the normal item numbering */
border-top:2px solid #317EAC;
background:rgba(49, 126, 172, 0.1);
}
.question ol>li:before,
.question ol>p>li:before {
content:"Questão " counter(li); /* Use the counter as content */
counter-increment:li; /* Increment the counter by 1 */
/* Position and style the number */
position:absolute;
top:-2px;
left:-2em;
-moz-box-sizing:border-box;
-webkit-box-sizing:border-box;
box-sizing:border-box;
width:7em;
/* Some space between the number and the content in browsers that support
generated content but not positioning it (Camino 2 is one example) */
margin-right:8px;
padding:4px;
border-top:2px solid #317EAC;
color:#fff;
background:#317EAC;
font-weight:bold;
font-family:"Helvetica Neue", Arial, sans-serif;
text-align:center;
}
.question ol ol {
counter-reset: subitem;
}
.question li ol,
.question li ul {margin-top:6px;}
.question ol ol li:last-child {margin-bottom:0;}
d13327 | train | Instead of creating a temporary table and using a cursor to update each set of columns, this is a set-based solution that avoids all of that.
For the first part, if you just need 7 days then you can use a simple values tally table with a common table expression and the Table Value Constructor (Transact-SQL):
declare @fromdate date = '20170605';
;with dates as (
select
[Date]=convert(date,dateadd(day,rn-1,@fromdate))
, rn
from (values (1),(2),(3),(4),(5),(6),(7)) t(rn)
)
Otherwise, you can generate an adhoc table of dates using stacked ctes in a common table expression like this:
declare @fromdate date = '20170605';
declare @thrudate date = dateadd(day,6,@fromdate)
;with n as (select n from (values(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) t(n))
, dates as (
select top (datediff(day, @fromdate, @thrudate)+1)
[Date]=convert(date,dateadd(day,row_number() over(order by (select 1))-1,@fromdate))
, rn = row_number() over(order by (select 1))
from n as deka cross join n as hecto cross join n as kilo
cross join n as tenK cross join n as hundredK
order by [Date]
)
Then cross join the dates with roommaster, and left join both the checkin and booking tables to see if a room has a reservation or is currently occupied:
, cte as (
select
rm.roomid
, d.rn
, Value = case
when rcd.roomid is not null then 1
when rbd.roomid is not null then 8
else 0
end
from dates d
cross join roommaster rm
left join roomcheckindetails rcd
on rm.roomid = rcd.roomid
and d.date >= rcd.checkindate
and d.date <= rcd.checkoutdate
left join roombookingdetails rbd
on rm.roomid = rbd.roomid
and d.date >= rbd.expectedcheckindate
and d.date <= rbd.expectedcheckoutdate
where rm.roommasterstatus <> 99
)
Then for the last piece, you can use conditional aggregation or pivot() (pick one) like so:
select
roomid
, Day1 = min(case when rn = 1 then value end)
, Day2 = min(case when rn = 2 then value end)
, Day3 = min(case when rn = 3 then value end)
, Day4 = min(case when rn = 4 then value end)
, Day5 = min(case when rn = 5 then value end)
, Day6 = min(case when rn = 6 then value end)
, Day7 = min(case when rn = 7 then value end)
from cte
group by roomid
select
roomid
, Day1 = [1]
, Day2 = [2]
, Day3 = [3]
, Day4 = [4]
, Day5 = [5]
, Day6 = [6]
, Day7 = [7]
from cte
pivot (min(value) for rn in ([1],[2],[3],[4],[5],[6],[7]))p
rextester demo with conditional aggregation: http://rextester.com/RUJ98491
rextester demo with pivot(): http://rextester.com/YNKU89188
both return the same results for my demo data:
+--------+------+------+------+------+------+------+------+
| roomid | Day1 | Day2 | Day3 | Day4 | Day5 | Day6 | Day7 |
+--------+------+------+------+------+------+------+------+
| 2 | 1 | 1 | 1 | 1 | 0 | 0 | 0 |
| 3 | 0 | 8 | 8 | 8 | 8 | 0 | 0 |
+--------+------+------+------+------+------+------+------+
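Conditional aggregation is portable enough to demonstrate in sqlite (a hedged sketch with two rooms and two days, not the full schema above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE cte(roomid INTEGER, rn INTEGER, value INTEGER);
    INSERT INTO cte VALUES (2, 1, 1), (2, 2, 0), (3, 1, 0), (3, 2, 8);
""")

# One output column per day: the CASE keeps only that day's value,
# and MIN collapses the group down to it.
rows = con.execute("""
    SELECT roomid,
           MIN(CASE WHEN rn = 1 THEN value END) AS Day1,
           MIN(CASE WHEN rn = 2 THEN value END) AS Day2
    FROM cte
    GROUP BY roomid
    ORDER BY roomid
""").fetchall()

print(rows)  # [(2, 1, 0), (3, 0, 8)]
```

sqlite has no PIVOT operator, which is one reason the conditional-aggregation form travels better between databases.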
Number and Calendar table reference:
* Generate a set or sequence without loops - 2 - Aaron Bertrand
* The "Numbers" or "Tally" Table: What it is and how it replaces a loop - Jeff Moden
* Creating a Date Table/Dimension in sql Server 2008 - David Stein
* Calendar Tables - Why You Need One - David Stein
* Creating a date dimension or calendar table in sql Server - Aaron Bertrand
d13328 | train | I was able to clone my repo on my local machine with this code. On the Cloud Function I got a connection timeout.
def hello_world(request):
    from google.cloud import storage
    from git import Repo
    import os

    storage_client = storage.Client()
    path_temp = '/tmp'
    path_clone = path_temp + '/clone'
    try:
        os.mkdir(path_clone)
    except OSError:
        print("Creation of the directory failed")
    else:
        print("Successfully created the directory")

    url = 'ssh://[email protected]@source.developers.google.com:2022/p/user/r/myrepo'
    bucket = storage_client.get_bucket('source-marian')
    blob = bucket.blob('id_rsa')
    envFile = path_temp + '/id_rsa'
    print("Successfully reading rsa")
    blob.download_to_filename(envFile)
    print("Downloaded the file from the bucket")
    with open(envFile, 'r') as f:
        print(f.read())

    ssh_cmd = 'ssh -i ' + envFile
    print(ssh_cmd)
    print(url)
    print(path_clone)
    try:
        print("start git clone")
        Repo.clone_from(url, path_clone, branch='master', env={'GIT_SSH_COMMAND': ssh_cmd})
        print("end git clone")
        print("cloning completed 200")
        for x in os.listdir(path_clone):
            print("List of files")
            print(x)
    except Exception as e:
        print(str(e))
And I added a requirements.txt which has:
gitpython
google-cloud-storage | unknown | |
d13329 | train | First, I presume you're not asking about a particular subprocess that exists simply to tell you the current working directory and do nothing else (Apducer's answer). If that were the case you could simply as os.getcwd() and forget the subprocess. You clearly already know that. So you must be dealing with some other (arbitrary?) subprocess.
Second, I presume you understand, via dr1fter's answer, that you have control over the working directory in which the subprocess starts. I suspect that's not enough for you.
Rather, I suspect you're thinking that the subprocess might, according to its own internal logic, have changed its working directory sometime since its launch, that you can't predict where it has ended up, and you want to be able to send some sort of signal to the subprocess at an arbitrary time, to interrogate it about where it's currently working. In general, this is only possible if the process has been specifically programmed with the logic that receives such a signal (through whatever route) and issues such a response. I think that's what SuperStew meant by the comment, "isn't that going to depend on the subprocess?"
I say "in general" because there are platform-specific approaches. For example, see:
*
*windows batch command to determine working directory of a process
*How do I print the current working directory of another user in linux?
A: By default, subprocesses you spawn inherit your PWD. You can, however, specify the cwd argument to the subprocess.Popen constructor to set a different initial PWD.
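A minimal sketch of that cwd argument; the child process here is just another Python interpreter so the example stays self-contained:

```python
import os
import subprocess
import sys
import tempfile

# Launch a child with an explicit initial working directory via cwd=.
with tempfile.TemporaryDirectory() as workdir:
    out = subprocess.check_output(
        [sys.executable, "-c", "import os; print(os.getcwd())"],
        cwd=workdir,
    ).decode().strip()
    # Compare canonical paths (on macOS /tmp is a symlink to /private/tmp).
    same = os.path.realpath(out) == os.path.realpath(workdir)
print(same)
```

Note this only sets where the child starts; the child is still free to chdir() afterwards.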
A: Unix (Linux, MacOS):
import subprocess
arguments = ['pwd']
directory = subprocess.check_output(arguments)
Windows:
import subprocess
# 'cd' is a cmd.exe builtin, so it must run through the shell
directory = subprocess.check_output('cd', shell=True)
If you want to run in both types of OS, you'll have to check the machine OS:
import os
import subprocess
if os.name == 'nt':  # Windows: 'cd' is a shell builtin, so shell=True is required
    directory = subprocess.check_output('cd', shell=True)
else:  # other (unix)
    directory = subprocess.check_output(['pwd']) | unknown | 
d13330 | train | If you take a look at the API docs for the select, you can see that MatSelect is exported as matSelect. This means that you can get a reference to the select in the template by writing #t="matSelect". If you then pass this #t into your function, you can update the select's value like this:
mainValuesChanged(term, event:MatSelectChange, select: MatSelect){
if (event.value==="newTeam"){
select.writeValue(getPrevTeamNamefromAService());
}
}
Hope this helps. | unknown | |
d13331 | train | Outbound E-mail doesn't specifically handle SSL; an enhancement request has been created to remedy this. However, Outbound E-mail uses Chilkat software, whose documentation seems to indicate that SSL should be chosen automatically when using port 995 (though I've not tried this). The configuration shown above doesn't actually contain a user/password; I imagine you've actually tried it with credentials? | unknown | 
d13332 | train | Never do option number 1 the way you do it. Instead of creating a bitmap out of a drawable every time you want to draw it, create a bitmap in the first place. That is, don't create a Drawable if you are going to draw a bitmap. Create a bitmap like this:
mBitmap = BitmapFactory.decodeResource(mContext.getResources(), R.drawable.myImage);
mBitmap = Bitmap.createScaledBitmap(mBitmap, width, height, true);
And this is something you do just once. After that, just draw like you do (canvas.drawbitmap()).
As for option number 2, you are doing it correctly.
Now, there are some differences.
Option 1 is faster to draw and usually good for background images. There is a significant change to FPS depending on if you draw a bitmap or drawable. Bitmaps are faster.
Option 2 is the way to go if you need to do things like scaling, moving and other kinds of manipulation of the image. Not as fast, but there's no other option if you want to do any of those things.
Hope this helps! | unknown | |
d13333 | train | Command="{Binding RemoveApplicationCommand}"
Did you mean RemoveServiceCommand?
A: Turns out the debugger was going over RemoveService the entire time, but I had not put a breakpoint there. I had a wrong name in my RemoveService implementation: ServerList.Remove() should have been ServiceList.Remove(). I assumed the debugger would hit a breakpoint in the RemoveServiceCommand property's getter, but it turns out it doesn't hit that when you click the button.
A: You're returning a new RelayCommand in your getter, but not saving / caching the instance. Save it in a member variable.
if (_cmd == null)
_cmd = new ....
return _cmd;
A: Try implementing like this
private ICommand finishCommand;
public ICommand FinishCommand
{
get
{
if (this.finishCommand == null)
{
this.finishCommand = new RelayCommand<object>(this.ExecuteFinishCommand, this.CanExecutFinishCommand);
}
return this.finishCommand;
}
}
private void ExecuteFinishCommand(object obj)
{
}
private bool CanExecutFinishCommand(object obj)
{
return true;
} | unknown | |
d13334 | train | Right, this is because the type is dynamic. That basically means the meaning of the float cast depends on the execution-time type of the value.
The is operator is checking whether float and double are the same type - and they're not, which is why it's returning false.
However, there is an explicit conversion from double to float, which is why the cast is working. I wouldn't use that MSDN C# reference to work out the finer details of how the language behaves - the language specification is normally a better bet.
A: Not sure, but I guess it is because there is no implicit conversion. See Implicit Numeric Conversions Table (C# Reference)
A: You are explicitly casting a double value to a float variable. That is perfectly fine.
When you check the type with is, it is an exact type match.
What you need is:
if (value is double || value is float)
{
myVariable = (float)value;
} | unknown | |
d13335 | train | The problem probably lies in how you load your script, as there seems to be nothing wrong with it. You have to ensure that when the code is run, all of the required DOM elements have already been rendered by the browser.
You can use window.onload handler (or it's .addEventListener() equivalent) to call your initialization script after the page has been fully loaded:
window.onload = function() {
function openNav() {
document.getElementById("mySidenav").style.width = "250px";
}
function closeNav() {
document.getElementById("mySidenav").style.width = "0";
}
var open = document.getElementById("openNav");
var close = document.getElementById("closeNav");
open.addEventListener("click", openNav, false);
close.addEventListener("click", closeNav, false);
};
You can see this code piece in action here: https://jsfiddle.net/sLLLtb3x/
(Please note that I've switched how JavaScript code is loaded - it's set to no wrap - in head to simulate described behavior of code loading).
As for the question of using functions in "external scripts" - in general, yes, you can do that, but in your case, your functions are wrapped in Immediately Invoked Functional Expression, which prevents your functions from leaking to the global scope. You can either manually assign them to some global variable, or you could remove the IIFE (keep in mind that you still need the onload behavior).
But as I've said already, your code example is fine, as you are assigning the event handler within the code piece itself (which is totally fine). | unknown | |
d13336 | train | Did you check the [AllowAnonymous] attribute? If you put this attribute on the "Action" that you want to be open access, then anyone can access the method without logging in.
If you want to allow access to a specific site/source, then please add this in your web.config file inside
<system.webServer></system.webServer>.
<httpProtocol>
<customHeaders>
<remove name="Access-Control-Allow-Origin" />
<add name="Access-Control-Allow-Origin" value="*" />
</customHeaders>
See here, * means you are allowing everyone. You can place your site's address instead of * if you want only your site to access your API.
Let me know if that is what you want, or if you need anything else.
A: When your site renders your page, include an encrypted JSON token in the page. The token will include a timestamp. When your page calls your API, include the token. Your service decrypts the token and validates that the timestamp is less than NN minutes old. If either of these checks fails, no business.
This way only your site will know how to create the correct token required to call your API. Tokens cannot be replayed if the timestamp age is short enough.
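As an illustration of that scheme, here is a minimal Python sketch. It signs the token with an HMAC rather than encrypting it (enough for the tamper and age checks described); the secret, field names and 5-minute window are made-up placeholders:

```python
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"   # placeholder key, known only to the server
MAX_AGE_SECONDS = 300            # the "NN minutes" window from the answer

def issue_token(payload: dict) -> str:
    # Embed an issue timestamp, then sign the serialized body.
    body = json.dumps({**payload, "ts": int(time.time())}, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def validate_token(token: str) -> bool:
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # not issued by us, or tampered with
    age = int(time.time()) - json.loads(body)["ts"]
    return 0 <= age <= MAX_AGE_SECONDS

token = issue_token({"page": "home"})
print(validate_token(token))        # a fresh, untouched token passes
print(validate_token(token + "x"))  # a tampered token fails
```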
You can get creative as to the structure/content of the token and how you encrypt, AES is a good choice. | unknown | |
d13337 | train | There is no public equivalent to SymGetLineFromAddr64 on OS X, but you can get source file and line number with the atos(1) developer tool.
Here is some sample code to get a fully symbolized backtrace.
#import <Foundation/Foundation.h>
static NSArray * Backtrace(NSArray *callStackReturnAddresses)
{
NSMutableArray *backtrace = [NSMutableArray new];
for (NSNumber *address in callStackReturnAddresses)
{
NSString *hexadecimalAddress = [NSString stringWithFormat:@"0x%0*lx", (int)sizeof(void*) * 2, address.unsignedIntegerValue];
NSTask *task = [NSTask new];
NSPipe *pipe = [NSPipe pipe];
task.launchPath = @"/usr/bin/xcrun";
task.arguments = @[ @"atos", @"-d", @"-p", @(getpid()).description, hexadecimalAddress ];
task.standardOutput = pipe;
NSString *symbol = @"???";
@try
{
[task launch];
NSData *data = [pipe.fileHandleForReading readDataToEndOfFile];
symbol = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
symbol = [symbol stringByTrimmingCharactersInSet:[NSCharacterSet newlineCharacterSet]];
[task waitUntilExit];
}
@catch (NSException *exception) {}
[backtrace addObject:[NSString stringWithFormat:@"%@ %@", hexadecimalAddress, symbol]];
}
return [backtrace copy];
}
static void test(void)
{
NSLog(@"%@", Backtrace([NSThread callStackReturnAddresses]));
}
int main(int argc, const char * argv[])
{
test();
return 0;
}
Running this code will produce the following output:
0x000000010000134e test (in a.out) (main.m:31)
0x000000010000131b main (in a.out) (main.m:36)
0x00007fff8c0935fd start (in libdyld.dylib) + 1 | unknown | |
d13338 | train | It makes sense to create a new subfolder for your screenshots every time you start the script. Use a unique name for it, for example the current timestamp:
const puppeteer = require('puppeteer');
const fs = require('fs');
const path = require('path');
const currentDate = new Date();
const timestamp = currentDate.getTime()+'';
const screensDir = path.resolve(path.join(__dirname, timestamp, 'Leipzig'));
if (!fs.existsSync(screensDir)) {
fs.mkdirSync(screensDir, {recursive: true});
}
async function start() {
const browser = await puppeteer.launch({
defaultViewport: {width: 1920, height: 1080}
});
const page = await browser.newPage();
await page.goto("https://spritpreisalarm.de")
await page.type("#searchform > table > tbody > tr > td:nth-child(1) > input[type=text]","Leipzig");
await page.click("#searchform > table > tbody > tr > td:nth-child(4) > input");
await page.goto('https://spritpreisalarm.de/preise/station/7450/LEIPZIG/1/1', {waitUntil: 'domcontentloaded'});
await page.screenshot({path: screensDir + '/totalLE.png'});
await page.goBack('https://spritpreisalarm.de/preise')
await page.goto('https://spritpreisalarm.de/preise/station/8731/LEIPZIG/1/1', {waitUntil: 'domcontentloaded'});
await page.screenshot({path: screensDir + '/ARALMstr.png'});
await page.goBack('https://spritpreisalarm.de/preise');
//...
await browser.close()
}
start()
A: I would simply add the date to the file name so you can do two actions in one ;p.
*
*have the date
*no overwriting
let date = new Date().toJSON().slice(0,10)
await page.screenshot({path: screensDir + `/${date}ARALMstr.png`});
await page.goBack('https://spritpreisalarm.de/preise');
I didn't run it, but you understand the idea. | unknown | |
d13339 | train | Try =IF(B1>=A1,B1*0.12,0)
check snap below
And for formatting you can select column, right click - Format Cells - select Number and keep 2 Decimal places. | unknown | |
d13340 | train | RISE is using v1.0.0 of reveal.js-chalkboard
In this version the color option is given as an array of colors, where the first color sets the pen color and the second color sets the board drawing color.
For example,
"rise": {
"chalkboard": {
"color": ["rgb(250, 250, 250)", "rgb(250, 250, 250)"]
},
"enable_chalkboard": true
} | unknown | |
d13341 | train | You can execute any SQL you want in a migration using connection.execute, for example:
def up
connection.execute(%q{
alter table t add constraint c check (x in ('a', 'b', 'c'))
})
end
def down
connection.execute('alter table t drop constraint c')
end
You can also use foreigner to add proper FK support to your migrations and schema.rb if you don't want to manage your FKs through raw SQL.
You can use the :unique => true option to add_index to get unique constraints/indexes.
I've done all of this and even added functions (both SQL and Pl/pgSQL) and triggers to a dedicated PostgreSQL database at Heroku. I'm not sure how much is supported on the shared databases but unique indexes certainly will be and I'm pretty sure FKs and CHECKs will be available as well. | unknown | |
d13342 | train | Same thing happened to me: I had generated the hash from the debug key for debugging in the dev environment, but when building the app for Google Play the problem appeared. You need to generate the hash with the certificate and alias that you will use to sign the app for publishing on Google Play.
Edit: You need to add key hash for both debug key and release key. | unknown | |
d13343 | train | Your question is very open-ended. Before preprocessing and fitting the model, you need to understand object detection. Once you understand what object detection is, you will have the answer to your first question, whether you are required to manually crop every one of the 13000 images. The answer is no. However, you will have to draw bounding boxes around faces and assign labels to images if they are not available in the training data.
Your second question is very vague. What do you mean by "exact procedure"? Is it the steps you need to follow, or how to do the preprocessing and fitting of the model in Python or another language? There are lots of references available on the internet about how to do preprocessing and model training for every specific problem. There are no universal steps which can be applied to any problem | unknown | 
d13344 | train | If you want to get information for single/specific drive at your local machine. You can do it as follow using DriveInfo class:
//C Drive Path, this is useful when you are about to find a Drive root from a Location Path.
string path = "C:\\Windows";
//Find its root directory i.e "C:\\"
string rootDir = Directory.GetDirectoryRoot(path);
//Get all information of Drive i.e C
DriveInfo driveInfo = new DriveInfo(rootDir); //you can pass Drive path here e.g DriveInfo("C:\\")
long availableFreeSpace = driveInfo.AvailableFreeSpace;
string driveFormat = driveInfo.DriveFormat;
string name = driveInfo.Name;
long totalSize = driveInfo.TotalSize;
A: For most information, you can use the DriveInfo class.
using System;
using System.IO;
class Info {
public static void Main() {
DriveInfo[] drives = DriveInfo.GetDrives();
foreach (DriveInfo drive in drives) {
//There are more attributes you can use.
//Check the MSDN link for a complete example.
Console.WriteLine(drive.Name);
if (drive.IsReady) Console.WriteLine(drive.TotalSize);
}
}
}
A: What about mounted volumes, where you have no drive letter?
foreach( ManagementObject volume in
new ManagementObjectSearcher("Select * from Win32_Volume" ).Get())
{
if( volume["FreeSpace"] != null )
{
Console.WriteLine("{0} = {1} out of {2}",
volume["Name"],
ulong.Parse(volume["FreeSpace"].ToString()).ToString("#,##0"),
ulong.Parse(volume["Capacity"].ToString()).ToString("#,##0"));
}
}
A: Use System.IO.DriveInfo class
http://msdn.microsoft.com/en-us/library/system.io.driveinfo.aspx
A: Check the DriveInfo Class and see if it contains all the info that you need.
A: In ASP .NET Core 3.1, if you want to get code that works both on windows and on linux, you can get your drives as follows:
var drives = DriveInfo
.GetDrives()
.Where(d => d.DriveType == DriveType.Fixed)
.Where(d => d.IsReady)
.ToArray();
If you don't apply both wheres, you are going to get many drives if you run the code in linux (e.g. "/dev", "/sys", "/etc/hosts", etc.).
This is specially useful when developing an app to work in a Linux Docker container. | unknown | |
d13345 | train | You need to check for null.
public static boolean validAnswer(String answer) {
if (answer!=null && (answer.equalsIgnoreCase("y") || answer.equalsIgnoreCase("n"))) {
return true;
}
return false;
}
or an other unconventional way.
public static boolean validAnswer(String answer) {
if ("y".equalsIgnoreCase(answer) || "n".equalsIgnoreCase(answer)) {
return true;
}
return false;
}
You need to fix
do {
System.out.print("\nWould you like to buy a vowel?: ");
answer = stdIn.nextLine();
if ("y".equalsIgnoreCase(answer)) {
getVowel(stdIn, vGuess);
} else if ("n".equalsIgnoreCase(answer)){
break;
}
} while (!validAnswer(answer)); | unknown | |
d13346 | train | Visual Studio 2015 is not producing the correct result for:
F{a}
The result should be a prvalue (gcc and clang both produce this result) but it is producing an lvalue. I am using the following modified version of the OP's code to produce this result:
#include <iostream>
class F {
public:
F(int n, int d) :n_(n), d_(d) {};
F(const F&) = default ;
F& operator *= (const F&){return *this; }
F& operator *= (int) { return *this; }
int n() const { return n_ ; }
int d() const { return d_ ; }
int n_, d_ ;
};
template<typename T>
struct value_category {
static constexpr auto value = "prvalue";
};
template<typename T>
struct value_category<T&> {
static constexpr auto value = "lvalue";
};
template<typename T>
struct value_category<T&&> {
static constexpr auto value = "xvalue";
};
#define VALUE_CATEGORY(expr) value_category<decltype((expr))>::value
int main()
{
const F a{3, 7};
const F b{5, 10};
std::cout << "\n" << VALUE_CATEGORY( F{a} ) << "\n";
}
Hat tip to Luc Danton for the VALUE_CATEGORY() code.
Visual Studio, via webcompiler (which runs a relatively recent version), produces:
lvalue
which must be const in this case to produce the error we are seeing. While both gcc and clang (see it live) produce:
prvalue
This may be related to equally puzzling Visual Studio bug std::move of string literal - which compiler is correct?.
Note we can get the same issue with gcc and clang using a const F:
using cF = const F ;
auto result = cF{a} *= b;
so not only is Visual Studio giving us the wrong value category, it is also arbitrarily adding a cv-qualifier.
As Hans noted in his comments to your question using F(a) produces the expected results since it correctly produces a prvalue.
The relevant section of the draft C++ standard is section 5.2.3 [expr.type.conv] which says:
Similarly, a simple-type-specifier or typename-specifier followed by a braced-init-list creates a temporary
object of the specified type direct-list-initialized (8.5.4) with the specified braced-init-list, and its value is
that temporary object as a prvalue.
Note, as far as I can tell this is not the "old MSVC lvalue cast bug". The solution to that issue is to use /Zc:rvalueCast which does not fix this issue. This issue also differs in the incorrect addition of a cv-qualifier which as far as I know does not happen with the previous issue.
A: My thought is that it's a bug in VS2015, because if you specify a user-defined copy constructor:
F(const F&);
or make the variable non-const, the code compiles successfully.
It looks like the object's constness was transferred into the newly created object.
A: Visual C++ has had a bug for some time where an identity cast doesn't produce a temporary, but refers to the original variable.
Bug report here: identity cast to non-reference type violates standard
A: From http://en.cppreference.com/w/cpp/language/copy_elision:
Under the following circumstances, the compilers are permitted to omit the
copy- and move-constructors of class objects even if copy/move constructor
and the destructor have observable side-effects.
.......
When a nameless temporary, not bound to any references, would be moved or
copied into an object of the same type (ignoring top-level cv-
qualification), the copy/move is omitted. When that temporary is
constructed, it is constructed directly in the storage where it would
otherwise be moved or copied to. When the nameless temporary is the
argument of a return statement, this variant of copy elision is known as
RVO, "return value optimization".
So the compiler has the option to ignore the copy (which in this case would act as an implicit cast to non-const type). | unknown | |
d13347 | train | Your form_for needs to be namespaced too:
<%= form_for [:admin, @country] do |f| %>
...
<% end %>
When you pass @country to form_for it's not going to know what namespace you want this form to go to and so it will default to just the standard POST /countries URL. | unknown | |
d13348 | train | Yes You can do like this
if($request->hasFile('image')) {
$file = $request->file('image');
//getting timestamp
$timestamp = str_replace([' ', ':'], '-', Carbon::now()->toDateTimeString());
$name = $timestamp. '-' .$file->getClientOriginalName();
$blog->image = $name;
$file->move(public_path().'/../public_html/img/blog/', $name);
} | unknown | |
d13349 | train | This will give you the commit being merged:
git rev-parse MERGE_HEAD
I do not think that there is a way to find the branch name other than guessing with a command like:
git for-each-ref | grep ^$(git rev-parse MERGE_HEAD)
(which finds all branches pointing to the commit you are merging)
Note that the commit being merged does not have to be a branch, one can also merge a commit directly like git merge deadbeef.
In the case of octopus merge, there is more than one commit being merged at the same time, and MERGE_HEAD is not present.
If you are to extract it from the merge message, then using .git/MERGE_MSG is safer than .git/COMMIT_EDITMSG, since it is less likely to be hand-edited.
The message is generated by git merge, hence has access to the branch name from git merge's arguments, but this does not seem to be stored on disk.
A:
but there's no guarantee that the user hasn't edited it
imvho you should use the merge commit summary line, after the user has had a chance to edit it.
I've edited subject lines for good reason. All branch names are repo-local. Sometimes you're pulling from a coworker, sometimes you realize you typo'd during branch creation, or you're publishing from a wip that turned out well, there's lots more ways to get there.
If you're worried about inbound commits not meeting your standards for one of your own repositories, vet the inbound commits in its pre-receive. No dvcs can be sure without doing that anyway.
#!/bin/sh
rc=0
existing=$(git for-each-ref --format='%(object)' refs/heads refs/tags);
validmergesubject='Merge (branch|tag) '\''[^ ]*'\'' (of|into) .*'
while read old new ref; do
while read commit Subject; do
if [[ ! $Subject =~ $validmergesubject ]]; then
echo Merge $commit in $ref history has invalid summary line \"$Subject\"
rc=1;
fi;
done >&2 <<EOD
$(git log --merges --pretty='%H %s' $new --not $existing)
EOD
done
exit $rc | unknown | |
d13350 | train | The ids dictionary is created for each rule in the kv file, and those ids are placed in the dictionary for the root of that rule. So the _input_parameters id is only in the ids of the Peenomat instance.
So, I think you need to change:
ip = App.get_running_app().root.ids._input_parameters
to something like:
ip = App.get_running_app().root.get_screen('peenomat').ids._input_parameters
The root of your App is the ScreenManager, and get_screen('peenomat') gets the Peenomat Screen instance. | unknown | |
d13351 | train | It looks to me like CharacterSkillLink itself is your through model in this case... it generically joins a content type to a SkillAbility
If you think about it, it also makes sense that if you're doing a bulk_create the objects that you pass in must be of the same model you're doing a bulk_create on.
So I think you want something like this:
def save(self, *args, **kwargs):
initialise_skill_links = not self.pk
super(NWODCharacter, self).save(*args, **kwargs)
if initialise_skill_links:
CharacterSkillLink.objects.bulk_create([
CharacterSkillLink(
skill=SkillAbility.objects.get_or_create(skill=skill)[0],
content_object=self
)
for skill in SkillAbility.Skills
])
Note you had too many pairs of [] inside your bulk_create.
Also I think you should use SkillAbility.objects.get_or_create()... for a foreign key you need the related object to exist. Just doing SkillAbility() won't fetch it from the db if it already exists and won't save it to the db if it doesn't. | unknown | |
d13352 | train | The most likely problem is that your numeric data is being interpreted as a string.
From the API.txt:
Note that to simplify the internal
logic in Flot both the x and y values
must be numbers (even if specifying
time series, see below for how to do
this). This is a common problem
because you might retrieve data from
the database and serialize them
directly to JSON without noticing the
wrong type. If you're getting
mysterious errors, double check that
you're inputting numbers and not
strings.
So either force your data to numeric types on the PHP side, or do it in JavaScript, perhaps with Number(some_data). | unknown | 
d13353 | train | This is quite tricky... almost impossible to make it really unbreakable. Any reasonably motivated person will be able to pierce through it. You'll only make it a little harder to do. In any case, you definitely can't store any secret key in the bundle itself. You'd need to securely obtain the decryption key over a secure channel from a server and use it as needed. Even then, someone doing a jailbreak would probably be able to run GDB over your running program and extract the secret key in RAM + the secret key would be shared amongst all users of your app... You're essentially trying to implement a DRM scheme, which is inherently flawed by design... Unless you need offline access, you might want to pull the data as needed from a secure server... at least you "could" throttle information leakage...
A: Here are a few thoughts.
If the book text is all alphanumeric data, then don't save the data as ASCII - save them in your own binary encoded format (for instance use 5 bits instead of 8 and pack into words). That gives you a bit of compression, slight obfuscation and a very cheap (in clock cycles) decompression. You would have a data format that is quick to access on the fly and will keep the casual curious hacker out of the text. Clock cycles would be my main concern and security second.
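The 5-bits-per-character packing can be sketched like this (Python used for illustration only; the 31-symbol alphabet is an assumption, not part of the answer):

```python
# Illustrative 5-bit packing: a 31-symbol alphabet (made up for this sketch)
# fits in 5 bits per character instead of 8. Obfuscation, not security.
ALPHABET = "abcdefghijklmnopqrstuvwxyz .,'?"

def pack(text: str) -> bytes:
    bits, nbits, out = 0, 0, bytearray()
    for ch in text:
        bits = (bits << 5) | ALPHABET.index(ch)
        nbits += 5
        while nbits >= 8:
            nbits -= 8
            out.append((bits >> nbits) & 0xFF)
    if nbits:
        out.append((bits << (8 - nbits)) & 0xFF)  # pad the final byte
    return bytes(out)

def unpack(data: bytes, length: int) -> str:
    bits, nbits, chars = 0, 0, []
    for byte in data:
        bits = (bits << 8) | byte
        nbits += 8
        while nbits >= 5 and len(chars) < length:
            nbits -= 5
            chars.append(ALPHABET[(bits >> nbits) & 0x1F])
    return "".join(chars)

packed = pack("hello world")
print(len(packed))         # 7 bytes instead of 11
print(unpack(packed, 11))  # hello world
```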
Another idea is store the decrypt key for a typical Blowfish encryption in obfuscated format in the app. Split into two or three constants that require some odd operation to restore for instance. But of course, now the overhead of Blowfish or whatever will be your concern.
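The split-into-constants idea might look like this rough sketch (the part values are made up, and XOR stands in for the "odd operation"):

```python
# Illustrative key splitting: the key never appears as a single literal.
# The two parts and the XOR recombination are made up for this sketch.
PART_A = bytes.fromhex("3f9a4c1d")
PART_B = bytes.fromhex("a17e02c9")

def restore_key() -> bytes:
    # Recombine the obfuscated parts at runtime.
    return bytes(a ^ b for a, b in zip(PART_A, PART_B))

print(restore_key().hex())  # 9ee44ed4
```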
Since you will not be able to implement perfect security (perfection is extremely expensive), the IP owners will have to use traditional copyright and trade secret techniques to fully protect their property. You've made it harder to hack, but it's still up to the lawyers to be diligent, just a book on the shelf in the reserved section of the library (no photocopies please!).
Cheers
A: I would keep the documents encrypted if I were you and just decrypt them as needed. One would easily be able to access the decrypted documents on a jailbroken device.
See the "Security Overview" document and the CryptoExercise sample code for encryption techniques
A: You probably won't like it, but the best way is to just not use HTML. Once you pass the decrypted HTML to UIWebView, it is very easy for a malicious user to steal it at that level, defeating any purpose your encryption algorithm had. A UIView subclass with custom drawing code and a custom encrypted backing format will be much more difficult to work around
A: From Mac OS X and iPhone OS Security Services:
You can use Keychain Services to
encrypt and store small amounts of
data (see Keychain Services Reference
and Keychain Services Programming
Guide). If you want to encrypt or
decrypt larger amounts of data in Mac
OS X, you can use the Common Security
Services Manager (CSSM) Cryptographic
Services Manager. This manager also
has functions to create and verify
digital signatures, generate
cryptographic keys, and create
cryptographic hashes. In iPhone OS,
the Certificate, Key, and Trust
Services API provides functions for
generating encryption keys, creating
and verifying digital signatures, and
encrypting blocks of data; see
Certificate, Key, and Trust Services
Reference.
It's always a choice between performance (encryption just doesn't come free) and security (security and everything else, really). But what else is new? If you keep the individual files small enough, maybe decryption doesn't slow you down much. Alternatively, you may consider predictive decryption such that you have certain files being decrypted in the background, say those linked from the currently viewed file, etc. I realize, however, that concurrency on the iPhone may be pretty spotty (I don't know as I haven't dropped the cash for a license). You may also realize performance gains by only encrypting those files that really need it; does an index/table of contents or other often-accessed file really need to be encrypted? Does that count as IP your client is worried about?
A: For compression I can recommend QuickLZ (fastest engine I saw, great compression ratio). | unknown | |
d13354 | train | I'd slightly readjust how the workflow runs and allow it to be easily run in parallel.
library(tidyverse)  # provides %>%, bind_rows(), map() and mutate() used below

# Use variables to adjust models, makes it easier to change sizes
iter <- 60
iter_samps <- 1000
factors_df <- data.frame(f1 = sample(LETTERS[1:3], iter_samps, replace = T),
f2 = sample(LETTERS[4:5], iter_samps, replace = T))
# using a data.frame in a longer format to hold the data, allows easier splitting
data_df <- rep(list(factors_df), iter) %>%
bind_rows(.id = "id") %>%
mutate(numeric_y = rnorm(iter_samps * iter),
count_y = rpois(iter_samps * iter, 10),
dispersed_count_y = MASS::rnegbin(iter_samps * iter, 10, 2))
# creating function that determines residuals
model_residuals <- function(data) {
data$lm_resid <- lm(numeric_y ~ f1+f2, data = data)$residuals
data$glm_resid <- residuals(object = glm(count_y ~ f1+f2, data = data, family = "poisson"), type = 'pearson')
return(data)
}
# How to run the models not in parallel
data_df %>%
split(.$id) %>%
map(model_residuals) %>%
bind_rows()
To run the models in parallel you can use multidplyr to do all the annoying work
library("multidplyr")
test = data_df %>%
partition(id) %>%
cluster_library("tidyverse") %>%
cluster_library("MASS") %>%
cluster_assign_value("model_residuals", model_residuals) %>%
do(results = model_residuals(.)) %>%
collect() %>%
.$results %>%
bind_rows() | unknown | |
d13355 | train | Try This.
<?php
class query {
    public $fields = array();

    public function listfields() {
        $rowcnt = 1;
        // assumes $this->mysqli holds a connected mysqli instance
        $result = $this->mysqli->query("SELECT id, name FROM fields", MYSQLI_USE_RESULT);
        while ($row = $result->fetch_assoc()) {
            $this->fields[$row["id"]] = $row["name"];
            //$this->fields[$rowcnt] = $row["name"]; // if you want different indexing
            $rowcnt++;
        }
        $result->free();
    }

    public function fields() {
        return $this->fields;
    }
}

$list = new query();
$list->listfields();
$field1 = $list->fields();
var_dump($field1);
?>
A: Instead of your function fields, you will need a property fields, which you can access.
Furthermore, I suggest using getters and setters instead of a public property; I'm sure you will figure out how.
This will return the data in the form array("1" => "Value 1", "2" => "value2").
class query {
public $fields;
public function fillfields ()
{
$result = $this->mysqli->query("SELECT id, name FROM fields", MYSQLI_USE_RESULT);
while($row=$result->fetch_assoc()){
$this->fields[] = $row["name"];
}
$result->free();
    }
}
$list = new query;
$list->fillfields();
$field1 = $list->fields[1];
echo $field1; | unknown | |
d13356 | train | There are two errors in your query.
Error 1: highway=street
Where does this tag come from? street is not a valid value for the highway key. In fact, since you want to obtain all streets you have to omit the value completely and just query for highway.
Error 2: node()
A road is not a node but a way. So you have to query for way(area)[...] instead. This also requires a recurse-up step (>;) to retrieve all nodes of these ways.
Corrected query
[out:json]; area[name = "New York"]; (way(area)[highway]; ); (._;>;); out; | unknown | |
d13357 | train | Custom attributes are not available in Cognito access token. Currently it is not possible to inject additional claims in Access Token using Pre Token Generation Lambda Trigger as well. PreToken Generation Lambda Trigger allows you to customize identity token(Id Token) claims only.
A: You can use ID token to get the token with custom attributes.
Access tokens are not intended to carry information about the user. They simply allow access to certain defined server resources.
You can pass an ID Token around different components of your client, and these components can use the ID Token to confirm that the user is authenticated and also to retrieve information about them.
How to retrieve Id token using amazon cognito identity js
cognitoUser.authenticateUser(authenticationDetails,{
onSuccess: function(result) {
var idToken = result.getIdToken().getJwtToken();
console.log('idToken is: ' + idToken);
},
onFailure: function(err) {
alert(err.message || JSON.stringify(err));
},
});
A: I have the same problem when I want to create several microservices. There isn't a way to customize an access token, only an identity token. However, I use client credentials in machine-to-machine flows, which need an access token, so there is no way I can customize my token. In the end, I decided to add such info (like user type) in the event header. It's not a very secure way compared to customizing a token, but there isn't any other easy way to do it right now. Otherwise, I would have to rewrite the authorizer in Cognito, i.e. write a custom authorizer, which is very painful.
A: I have the same issue with Cognito; other tools exist, like the "PingFederate" auth server from Ping Identity and the Auth0 auth server. I know that the requirement isn't part of the standard, but these applications were my alternatives to fix this issue.
A: The responses suggesting to use the ID Token for authorization in your backend systems are bad security practice. ID Tokens are for determining that the user is in fact logged in and the identity of that user. This is something that should be performed in your frontend. Access Tokens on the other hand are for determining that a request (to your backend) is authorized. ID Tokens do not have the same security controls against spoofing that Access Tokens have (see this blog from Auth0: https://auth0.com/blog/id-token-access-token-what-is-the-difference/).
Instead, I recommend that your backend accept an Access Token as a Bearer token via the Authorization HTTP header. Your backend then calls the corresponding /userinfo endpoint (see: https://openid.net/specs/openid-connect-core-1_0.html#UserInfo) on the authorization server that issued the Access Token, passing said Access Token to that endpoint. This endpoint will return all of the ID Token information and claims, which you can then use to make authorization decisions in your code.
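A minimal sketch of that backend flow in Node-style JavaScript — the helper names are mine, not from any Cognito SDK, and the userinfo endpoint URL is yours to supply:

```javascript
// Pull the raw token out of an incoming "Authorization: Bearer <token>" header.
// Hypothetical helper, shown only to illustrate the flow described above.
function extractBearerToken(authorizationHeader) {
  const [scheme, token] = (authorizationHeader || "").split(" ");
  if (!token || scheme.toLowerCase() !== "bearer") {
    throw new Error("expected a Bearer token");
  }
  return token;
}

// Exchange the access token for the user's claims via the issuer's /userinfo endpoint.
// Requires a runtime with global fetch (e.g. Node 18+).
async function fetchUserInfo(accessToken, userInfoEndpoint) {
  const response = await fetch(userInfoEndpoint, {
    headers: { Authorization: "Bearer " + accessToken },
  });
  if (!response.ok) {
    throw new Error("userinfo request failed: " + response.status);
  }
  return response.json(); // e.g. { sub: "...", email: "...", ... }
}
```

Your backend would call extractBearerToken on each incoming request, pass the result to fetchUserInfo against the authorization server that issued the token, and make authorization decisions from the returned claims.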
d13358 | train | 1) Passing in a temporary combo box model to selectWithKeyChar() that
doesn't include the accents. If this is possible, how do you pass in
the model to selectWithKeyChar()?
2) Overriding the selectWithKeyChar() method?
3) Making a custom method. In this case, how would you make it run
instead of the one that already exists in JComboBox.java?
All of these are possible. You can create a temporary ComboBoxModel, and you can extend JComboBox to override selectWithKeyChar in your CustomComboBox. Here is a code sample:
CustomComboBox.java
import java.text.Normalizer;
import javax.swing.*;

public class CustomComboBox extends JComboBox<String>{
private static final long serialVersionUID = 1L;
public CustomComboBox(String[] items) {
super(items);
}
@Override
public boolean selectWithKeyChar(char keyChar) {
int index;
ComboBoxModel<String> tempModel = getModel(); // Put the model on temp variable for saving it for later use.
String[] itemsArr = new String[getModel().getSize()];
for(int i = 0; i < getModel().getSize(); i++) {
// This will normalizes the Strings
String normalize = Normalizer.normalize(getModel().getElementAt(i), Normalizer.Form.NFD);
normalize = normalize.replaceAll("\\p{M}", "");
itemsArr[i] = normalize;
}
if ( keySelectionManager == null )
keySelectionManager = createDefaultKeySelectionManager();
setModel(new DefaultComboBoxModel<String>(itemsArr)); // Set the Temporary items to be checked.
index = keySelectionManager.selectionForKey(keyChar,getModel());
setModel(tempModel); // Set back the original items.
System.out.println(index);
if ( index != -1 ) {
setSelectedIndex(index);
return true;
}
else
return false;
}
}
If you test it, even if you enter a plain char 'a', it will match elements in the JComboBox whether they have accents or not.
TestMain.java
import javax.swing.JFrame;

public class TestMain {
public static void main(String[] args) {
String[] lists = {"ápa","Zabra","Prime"};
CustomComboBox combo = new CustomComboBox(lists);
System.out.println(combo.selectWithKeyChar('A'));
JFrame frame = new JFrame("Sample");
frame.setSize(400, 400);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setLayout(null);
frame.setVisible(true);
combo.setSize(70, 30);
combo.setLocation(100, 100);
frame.add(combo);
}
} | unknown | |
d13359 | train | Autolayout and SizeClasses wouldn't target specific devices, so you will have to set the font sizes programatically. You can use check the size of your device using UIScreen.mainScreen().bounds.size.height and set the size of your font accordingly. This solution will clarify you more.
A: As you mentioned in your question you need to give separate font sizes for different devices.
The first thing to note is that we can't achieve this in the storyboard.
You need to assign different font sizes manually, using if conditions to check the device.
For ex:
if ([[UIScreen mainScreen] bounds].size.height == 568) {
// Assign Font size for iPhone 5
}else if ([[UIScreen mainScreen] bounds].size.height == 667){
// Assign Font size for iPhone 6
}else if ([[UIScreen mainScreen] bounds].size.height == 736){
// Assign Font size for iPhone 6+
}else if ([[UIScreen mainScreen] bounds].size.height == 480){
// Assign Font size for iPhone 4s
}
Note:
*
*You can create a separate font class; if you have done so already, just put the above checks in that class.
d13360 | train | I am pretty sure it doesn't work because users is undefined when your component renders for the first time.
Try to initialize that variable in the state doing something like this:
state = {
users: {}
}
and then use a fallback since id will also be undefined doing this:
<FormInputs
ncols={["col-md-5", "col-md-3", "col-md-4"]}
properties={[
{
defaultValue: this.state.users.id || "Fallback Value", // this will render "Fallback Value" if users.id is undefined
}
]}
/>
If this is not the case please share more information about your situation. | unknown | |
d13361 | train | shouldn't this:
var item = $('edit-submitted-file'+c+'-ajax-wrapper');
be
var item = $('#edit-submitted-file'+c+'-ajax-wrapper'); //if using id
or
var item = $('.edit-submitted-file'+c+'-ajax-wrapper'); //if using class
A: You need to prepend '#' to the selector for -ajax-wrapper because you are selecting it by ID.
The counter needs to start at 2 and hide addmore on 6. The return false (which should actually be e.preventDefault()) should be outside of that conditional.
addmore should be added after the last of the file input rows, not the first.
A: Just use the next() function and :visible selector
Example:
HTML
<label><span>File</span> <input type="file" /></label>
<label><span>File</span> <input type="file" /></label>
<label><span>File</span> <input type="file" /></label>
...
JQuery
$('label:gt(0)').hide(); // hide all except the first one
$('label:visible').click(function() { $(this).next('label').show(); }); // on click, show the following
:visible is not necessary, as you can't click on hidden elements, but it's just an optimisation. | unknown | |
d13362 | train | In my machine, I can reproduce your case by using different alignments for pointers. Try this code:
mat_bench_fixture() {
matA = new double[n * n + 256];
matB = new double[n * n + 256];
matC = new double[n * n + 256];
// align pointers to 1024
matA = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(matA) + 1023)&~1023);
matB = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(matB) + 1023)&~1023);
matC = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(matC) + 1023)&~1023);
// toggle this to toggle alignment offset of matB
// matB += 2;
}
If I toggle the commented line in this code, I get a 34% difference on my machine.
Different alignment offsets cause different timings. You can play with offsetting the other 2 pointers too. Sometimes the difference is smaller, sometimes bigger, sometimes there's no change.
This must be caused by a cache issue: by having different last bits of the pointers, different collision patterns occur in the cache. And as your routine is memory intensive (all the data doesn't fit into L1), cache performance matters a lot. | unknown | |
d13363 | train | It looks like message['path'] is a bytes object rather than a string, and trying to apply strip() to a bytes object yields that rather cryptic error message. Instead, try message['path'].decode() to convert it to a string, then do your stripping and splitting.
See also Python 3.0 urllib.parse error "Type str doesn't support the buffer API" | unknown | |
d13364 | train | if you simply must access the internal methods another work around would be making the projects as Friend Assemblies like that:
//Lib Project
#pragma once
//define LibTest as friend assembly which will allow access to internal members
using namespace System;
using namespace System::Runtime::CompilerServices;
[assembly:InternalsVisibleTo("LibTest")];
public ref class Lib
{
public:
Lib(void);
public:
void Extract( std::string& data_ );
};
//LibTest Project
#pragma once
#using <Lib.dll> as_friend
ref class LibTest
{
public:
LibTest(void);
};
A: The problem is that std::string will compile as a internal (non public) type. This is actually a change in VS 2005+:
http://msdn.microsoft.com/en-us/library/ms177253(VS.80).aspx:
Native types are private by default outside the assembly
Native types now will not be visible outside the assembly by default. For more information on type visibility outside the assembly, see Type Visibility. This change was primarily driven by the needs of developers using other, case-insensitive languages, when referencing metadata authored in Visual C++.
You can confirm this using Ildasm or reflector, you will see that your extract method is compiled as:
public unsafe void Extract(basic_string<char,std::char_traits<char>,std::allocator<char> >* modopt(IsImplicitlyDereferenced) data_)
with basic_string being compiled as:
[StructLayout(LayoutKind.Sequential, Size=0x20), NativeCppClass, MiscellaneousBits(0x40), DebugInfoInPDB, UnsafeValueType]
internal struct basic_string<char,std::char_traits<char>,std::allocator<char> >
Note the internal.
Unfortunately, you are then unable to call such a method from a different assembly.
There is a workaround available in some cases: You can force the native type to be compiled as public using the make_public pragma.
e.g. if you have a method Extract2 such as:
void Extract2( std::exception& data_ );
you can force std::exception to be compiled as public by including this pragma statement beforehand:
#pragma make_public(std::exception)
this method is now callable across assemblies.
Unfortunately, make_public does not work for templated types (std::string is just a typedef for basic_string<>).
I don't think there is anything you can do to make it work. I recommend using the managed type System::String^ instead in all your public API. This also ensures that your library is easily callable from other CLR languages such as c#
A: In addition to the solutions described above, one can subclass the templated type to obtain a non-templated type, and include its definition in both projects, thus overcoming some of the problems mentioned above. | unknown | |
d13365 | train | You can use ng-submit for this:
<form ng-submit="goToNextForm()">
<input type="text" ng-model="email">
<button type="submit">Submit</button>
</form>
And in controller:
$scope.goToNextForm = function(){
//code to save data and move to next controller/form here
}
A: If you're not navigating to a different view, you probably dont need another controller. You can show and hide forms conditionally with ng-if. Ie. say first form is done, you've posted it to the database. You can do something like this
$scope.form = 1
<form id="form1" ng-if="form === 1">
<!-- your form html -->
</form>
<form id="form2" ng-if="form === 2">
<!-- your form html -->
</form>
then when form1 is submitted, you can do
$scope.form = 2
in your controller to hide the first form and render the second one
If you're set on the different forms having different controllers, you can do something like this
<div ng-controller="mainCtrl">
<form ng-controller="form1Ctrl" id="form1" ng-if="form === 1">
<!-- your form html -->
</form>
<form ng-controller="form2Ctrl" id="form2" ng-if="form === 2">
<!-- your form html -->
</form>
</div>
You would set the form variable from the mainCtrl's scope | unknown | |
d13366 | train | The fourth argument of WSARecv should be a pointer to the number of bytes received. However, you are passing the address of a pointer to the length of your buffer.
If you were passing the pointer, and not a pointer to a pointer, then it would be weird but it should work fine (as it won't corrupt anything). As it is now, however, it is probably writing where it shouldn't.
In short: Check and fix the fourth parameter. | unknown | |
d13367 | train | You have multiple options. Let me explain the basic concept first. Generally every app on cloudControl has its own subdomain like APP_NAME.cloudcontrolled.com. Requests to those subdomains (or from a CNAME pointing to that subdomain) are forwarded by the routing tier to one or more of the containers available to serve requests. What runs inside each container is controlled by the Buildpack. Depending on the preferences of each language ecosystem (e.g. PHP vs Python) the runtime environment in the container differs. So for PHP, Apache is available while for Python it is not.
Option 1: The recommended way would be to have e.g. www.example.com point to PYTHON_APP.cloudcontrolled.com and blog.example.com point to PHP_APP.cloudcontrolled.com.
Option 2: Alternatively if you have to use /blog instead of a blog. subdomain you can teach the Apache running inside the PHP App's containers to only serve requests for /blog and forward everything else to PYTHON_APP.cloudcontrolled.com.
Option 3: Soon you'd have a third option also, but this isn't available yet. We're currently working on enabling the Python buildpack to run Nginx inside the containers and use WSGI to communicate with the Python process. (Currently the Python process has to listen on the $PORT and serve HTTP directly) As soon as Nginx is available you could also configure it to forward /blog to PHP_APP.cloudcontrolled.com and serve everything else directly.
My recommendation would be to go with option 1, since that keeps both apps nicely decoupled. By permanently redirecting /blog in the Python app to blog.example.com you can make the migration painless. | unknown | |
d13368 | train | It seems you cannot do an update of multiple XML node values in a single UPDATE statement, as mentioned here: How to modify multiple nodes using SQL XQuery in MS SQL 2005
In your query above I think only the first instance found will be updated. | unknown | |
d13369 | train | Use this:
=DATE(LEFT(A1,4),MID(A1,5,2),MID(A1,7,2))+TIME(RIGHT(A1,2),0,0)
Then format the cells with a custom format of:
yyyy/mm/dd h:mm | unknown | |
d13370 | train | As far as I can see, no, it's not really possible. The development version has some methods for limiting foreign keys, but it doesn't seem to me that limiting based on the customer is possible, since it depends on separate foreign keys.
The best suggestion, if you're really bent on doing it in the admin form, would be to use Javascript to do it. You would still have to make AJAX calls to get lists of what printers customers had and what cartridges to show based on that, but it could be done. You would just specify the JS files to load with the Media class.
But I think that's more work than it's worth. The easiest way I would see to do it would be with Form Wizards. That way, you'd have a step to select the customer so on the next step you know what cartridges to show.
Hope that helps!
A: I've worked similar problems, and have come to the conclusion that in many cases like this, it's really better to write your own administration interface using forms than it is to try and shoehorn functionality into the admin which is not intended to be there.
As far as 3) goes, it depends on what your product base looks like. If you're likely to have customers ordering 50 identical widgets, you probably do want a quantity field. If customers are more likely to be ordering 2 widgets, one in red, one in blue, add each item separately to the manytomany field and group them in your order interface. | unknown | |
d13371 | train | Is the term "function projection" a standard term? Or does this feature carry a different name in the functional programming literature?
No, you usually call it partial application.
Does any variety of LISP implement this feature? Which ones?
Practically all Lisps allow you to partially apply a function, but usually you need to write a closure explicitly. For example, in Common Lisp:
(defun add (x y)
(+ x y))
The utility function curry from alexandria can be used to create a closure:
USER> (alexandria:curry #'add 42)
#<CLOSURE (LAMBDA (&REST ALEXANDRIA.1.0.0::MORE) :IN CURRY) {1019FE178B}>
USER> (funcall * 3) ;; asterisk (*) is the previous value, the closure
45
The resulting closure is equivalent to the following one:
(lambda (y) (add 42 y))
Some functional languages like OCaml only allow functions to have a single parameter, but syntactically you can define functions of multiple parameters:
(fun x y -> x + y)
The above is equivalent to:
(function x -> (function y -> x + y))
See also What is the difference between currying and partial application?
Nb. in fact the q documentation refers to it as partial application:
Notationally, projection is a partial application in which some arguments are supplied and the others are omitted
A: I think there is another way of doing this:
q)f:2+
q)g:{"result: ",string x}
q)'[g;f]3
"result: 5"
It is function composition: 3 is passed to f, then the result from f is passed to g.
I'm not sure if it is LISP, but it could achieve the same result. | unknown | |
d13372 | train | If the lists are sorted, then it's relatively straightforward if you do something like this:
List<Integer> intersection = new ArrayList<>();
int i = 0;
int j = 0;
while (i < list1.size() && j < list2.size()) {
int a = list1.get(i);
int b = list2.get(j);
if (a < b) {
i++;
} else if (b < a) {
j++;
} else { // a == b
intersection.add(a);
i++;
j++;
}
}
On each iteration of the loop, the quantity i + j increases by at least 1, and the loop is guaranteed to be done when i + j >= list1.size() + list2.size(), so the whole thing does at most O(list1.size() + list2.size()) comparisons.
A: Use an array of ints. Taking your first list, for each element, set the value at that index to 1. So if the first element is 3, put 1 in array[3]. Now, we know that 3 is present in first list. Putting 1 will help you distinguish from a 3 that is present in the earlier list versus a 3 which is repeated in current list.
*
*Iterate through all the other k-1 lists
*For every element, check the value in array for that index
*If the value is 0, set it to this list number
*If the value is a number less than this list number, this number is a duplicate and has already appeared in a previous list.
*If this number is equal to this list index it means this number already occurred in this list but not in previous lists, so not yet a duplicate.
*The numbers that you are getting as duplicates, add them to another list.
*Finish all iterations
Finally print the duplicates.
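A sketch of those steps in Java (method and variable names are mine; it assumes values are non-negative ints bounded by maxValue, as the index-into-an-array idea requires):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CrossListDuplicates {
    // marks[v] remembers the 1-based number of the last list in which value v appeared.
    static List<Integer> duplicatesAcrossLists(List<List<Integer>> lists, int maxValue) {
        int[] marks = new int[maxValue + 1];
        List<Integer> duplicates = new ArrayList<>();
        for (int listNo = 1; listNo <= lists.size(); listNo++) {
            for (int v : lists.get(listNo - 1)) {
                if (marks[v] == 0) {
                    marks[v] = listNo;        // first time we see this value at all
                } else if (marks[v] < listNo) {
                    duplicates.add(v);        // already appeared in an earlier list
                    marks[v] = listNo;        // don't report it again for this list
                }
                // marks[v] == listNo: repeated within the same list, not yet a duplicate
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        List<List<Integer>> lists = Arrays.asList(
                Arrays.asList(3, 5, 3),  // 3 repeats within this list: not a duplicate
                Arrays.asList(5, 7));    // 5 already appeared in the first list
        System.out.println(duplicatesAcrossLists(lists, 10)); // prints [5]
    }
}
```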
Original Wrong Answer
*
*Create a HashSet<Integer>
*Take all values from master and add to it - O(master list count)
*Now just iterate through first and second arrays and see if their elements are in that HashSet - O(each list count) | unknown | |
d13373 | train | I modified your SaveTo_TextFile method. I added two columns to my dvList [Column1] and [Column2]. I was able to save the decimal value I entered in [Column2] successfully.
I do not know how you formatted your DataGridView column but mine is only a DataGridViewTextBoxCell with no formatting.
If I used formatting, this is what I would set my numeric column's row cellstyle to:
dvList.Columns("Column2").DefaultCellStyle.Format = "N2"
SaveTo_TextFile method
Private Sub Saveto_TextFile(ByVal dvList As DataGridView, ByVal filename As String)
Dim numCols As Integer = dvList.ColumnCount - 1
Dim numRows As Integer = dvList.RowCount
Dim strDestinationFile As String = "" & filename & ".txt"
Dim tw As TextWriter = New StreamWriter(strDestinationFile)
For dvRow As Integer = 0 To numRows - 1
'checking if the checkbox is checked, then write to text file
If dvList.Rows(dvRow).Cells("Column1").Value = True Then
tw.WriteLine(dvList.Rows(dvRow).Cells("Column2").Value) 'Column2 is the name of the column ... You can also use an index here
Else
tw.WriteLine("Not Checked")
End If
'write the remaining rows in the text file
For dvCol As Integer = 1 To numCols
tw.WriteLine(dvList.Rows(dvRow).Cells(dvCol).Value)
If (dvCol <> numCols) Then
tw.WriteLine("???")
End If
Next
tw.WriteLine()
Next
tw.Close()
End Sub | unknown | |
d13374 | train | Have you tried this solution from the UI-Router documentation?
Instead of using the global $stateParams service object,
inject $uiRouterGlobals and use UIRouterGlobals.params
MyService.$inject = ['$uiRouterGlobals'];
function MyService($uiRouterGlobals) {
return {
paramValues: function () {
return $uiRouterGlobals.params;
}
}
}
Instead of using the per-transition $stateParams object,
inject the current Transition (as $transition$) and use Transition.params
MyController.$inject = ['$transition$'];
function MyController($transition$) {
var username = $transition$.params().username;
// .. do something with username
}
https://ui-router.github.io/ng1/docs/latest/modules/injectables.html#_stateparams
A: Pass data as the value of your attribute-type directive. Then retrieve it via the attrs argument in the link function. Then eval it against scope.
<input unique-name="{doohickeyId: doohickeyIdFromStateParams}" />
link: function (scope, elm, attrs) {
var data = scope.$eval(attrs.uniqueName);
console.log('My doohickey ID is: ', data.doohickeyId);
}
A: I found an answer offline, but I wanted to post it here for future reference. The approach we decided to take was to set up our own custom global state service to hold the state params. So in app/app.js we've got something like this in app.run():
$transitions.onFinish({}, function ($transition$) {
customStateService.setCurrentStateParams($transition$);
});
Then we can inject our customStateService wherever we need it, and call a getCurrentStateParams() function to retrieve whatever params were available in the most recent successful state transition, similar to how we were previously using $stateParams. | unknown | |
d13375 | train | Sticking to RC isn't very viable in the long term. Especially now, when Visual Studio 2015 RTM has been released.
Unless you have some really strong incentive to stay on Release Candidate, I would recommend to:
*
*Get RTM Express Edition for free, whichever suits you best. SSDT 2015 is compatible with Web and Windows Desktop;
*Get SSDT 2015 (again, for free) and install on top of that.
AFAIR, that should give you SQL Server database projects and everything else you might need. The only catch is that, as usual, it will be a somewhat longer route if you have ever had any CTP / RC versions installed on the same Windows instance before. Then again, it's nothing new. | unknown | |
d13376 | train | Try to provide date by calculating first and then use it in query like below
$date = date("Y-m-d", strtotime('-3 day')); // use MySQL's Y-m-d format so the date comparison works
$conn = getConnected("oversizeBoard");
mysqli_query($conn, "DELETE FROM postedLoads WHERE date < '".$date."'");
It might help you. If you need any other solution or help, do ask here.
A: Use MySQL function TIMESTAMPDIFF(unit,datetime_expr1,datetime_expr2);
The function calculates the difference between two dates and returns output based on the unit parameter passed.
Try this:
DELETE FROM postedLoads WHERE TIMESTAMPDIFF(DAY, date, NOW()) < 3;
For detailed info of function:http://www.w3resource.com/mysql/date-and-time-functions/mysql-timestampdiff-function.php | unknown | |
d13377 | train | std::thread stores copies of the arguments it is passed. Which as Massimiliano Janes pointed out, is evaluated in the context of the caller to a temporary. For all intents and purposes, it's better to consider it as a const object.
Since x is a non-const reference, it cannot bind to the argument being fed to it by the thread.
If you want x to refer to i, you need to use std::reference_wrapper.
#include <thread>
#include <string>
#include <functional>
static void foo(std::string , int & )
{
while(true);
}
int main() {
int i = 1;
auto thd = std::thread(foo, std::string("bar"), std::ref(i));
thd.join();
}
Live Example
The utility std::ref will create it on the fly.
A: std::thread's constructor performs a decay_copy on its arguments before invoking the callable, perfect-forwarding the result to it; in your foo, you're trying to bind an lvalue reference (int& x) to an rvalue reference (to the temporary), hence the error; either take an int, an int const&, or an int&& instead (or pass a reference wrapper).
A: Following on from StoryTeller's answer, a lambda may offer a clearer way to express this:
I think there are a couple of scenarios:
If we really do want to pass a reference to i in our outer scope:
auto thd = std::thread([&i]
{
foo("bar", i);
});
And if foo taking a reference just happens to be an historical accident:
auto thd = std::thread([]() mutable
{
int i = 1;
foo("bar", i);
});
In the second form, we have localised the variable i and reduced the risk that it will be read or written to outside the thread (which would be UB). | unknown | |
d13378 | train | It is all automatically included.
You are perhaps thinking of import statements. The Activities you write in the same package will automatically be included. For other code, you can just press [Ctrl/Command][Shift][O] in Eclipse to auto-import.
A: The answer is yes, there is an include statement, but it only works in XML layouts.
Below you can find one example of it. The real purpose of this include is to use it like a template:
<include
android:layout_height="wrap_content"
layout="@layout/activity_header_template" />
And if your question is related to the Java source code, there is also an import option, but that is not like C++ or PHP, where you write some piece of code and include the file wherever you want. If you want something like that, it should be a library.
For Instance,
import ActionBarSherlock Library into your project. Go to your project properties by right clicking on your project > Properties > Android > Add > Select ActionBarSherlockLib > Apply > OK.
and then inside your main activity class
import com.actionbarsherlock.app.SherlockActivity; // this is how you import a class from the library
A:
is there any way to include codes in "activity"
If by "codes in activity" you mean "XML layout files", then, yes, there is an <include> tag that you can use.
If by "codes in activity" you mean Java code, then that is not possible, as Java does not support an include directive the way C/C++ do. | unknown | |
d13379 | train | This seems to work, as long as you can assume that all the frames in an AnimationDrawable are BitmapDrawables:
for(int i = 0; i < animation.getNumberOfFrames(); i++) {
Drawable frame = animation.getFrame(i);
if(frame instanceof BitmapDrawable) {
BitmapDrawable frameBitmap = (BitmapDrawable)frame;
frameBitmap.getPaint().setFilterBitmap(false);
}
}
A: If you know the texture ID of the texture that's being rendered, at any time after it's created and before it's rendered you should be able to do:
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
If this is the only texture being rendered then you can do that anywhere from your render thread and it will get picked up without having to explicitly glBindTexture(). | unknown | |
d13380 | train | For Swift 5 use the following:
yourDataSetName.fill = LinearGradientFill(....
or
yourDataSetName.fill = ColorFill(...
More details you can read here:
Charts
A: I believe this is the code that you are looking for. You should be able to use the same ChartFill class in Swift to set set1.fill.
let gradColors = [UIColor.cyanColor().CGColor, UIColor.clearColor.CGColor]
let colorLocations:[CGFloat] = [0.0, 1.0]
if let gradient = CGGradientCreateWithColors(CGColorSpaceCreateDeviceRGB(), gradColors, colorLocations) {
set1.fill = ChartFill(linearGradient: gradient, angle: 90.0)
}
A: This works perfectly (Swift 3.1 and ios-charts 3.0.1)
let gradientColors = [UIColor.cyan.cgColor, UIColor.clear.cgColor] as CFArray // Colors of the gradient
let colorLocations:[CGFloat] = [1.0, 0.0] // Positioning of the gradient
let gradient = CGGradient.init(colorsSpace: CGColorSpaceCreateDeviceRGB(), colors: gradientColors, locations: colorLocations) // Gradient Object
yourDataSetName.fill = Fill.fillWithLinearGradient(gradient!, angle: 90.0) // Set the Gradient
set.drawFilledEnabled = true // Draw the Gradient
Result
A: For swift4
let colorTop = UIColor(red: 255.0/255.0, green: 149.0/255.0, blue: 0.0/255.0, alpha: 1.0).cgColor
let colorBottom = UIColor(red: 255.0/255.0, green: 94.0/255.0, blue: 58.0/255.0, alpha: 1.0).cgColor
let gradientColors = [colorTop, colorBottom] as CFArray
let colorLocations:[CGFloat] = [0.0, 1.0]
let gradient = CGGradient.init(colorsSpace: CGColorSpaceCreateDeviceRGB(), colors: gradientColors, locations: colorLocations) // Gradient Object
yourDataSetName.fill = Fill.fillWithLinearGradient(gradient!, angle: 90.0)
A: Fill.fillWithLinearGradient has been updated to LinearGradientFill
A: Make sure to use the latest version of ios-charts. If you use CocoaPods to install ios-charts change this:
pod 'Charts'
to this
pod 'Charts', '~> 2.2' | unknown | |
d13381 | train | You probably want something like django-easy-select2 [readthedocs.io]. You can install this (in your local environment) with:
pip3 install django-easy-select2
Next you add 'easy_select2' to the INSTALLED_APPS setting [Django-doc]:
# settings.py
# …
INSTALLED_APPS = [
# …,
'easy_select2',
# …,
]
# …
Now you can make use of the Select2 widget in your Form (or ModelForm):
from easy_select2 import Select2
class MyModelForm(forms.ModelForm):
# …
class Meta:
model = MyModel
widgets = {
'my_field': Select2
} | unknown | |
d13382 | train | The easiest way is to use a map as the value of your outer map, like the following:
Map<String, Map<String, String>> nestedMap = new HashMap<>();
Map<String, String> fooInnerMap = new HashMap<>(), barInnerMap = new HashMap<>();
nestedMap.put ("foo", fooInnerMap);
nestedMap.put ("bar", barInnerMap);
However, this is not really convenient to use. If you want better answers, please specify what you want and what you tried.
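If you do stay with nested maps, Java 8's computeIfAbsent makes the insertion less painful (class and key names here are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class NestedMapDemo {
    // Insert a value two levels deep, creating the inner map on demand.
    static void put(Map<String, Map<String, String>> map,
                    String outer, String inner, String value) {
        map.computeIfAbsent(outer, k -> new HashMap<>()).put(inner, value);
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> nested = new HashMap<>();
        put(nested, "foo", "color", "red");
        put(nested, "foo", "size", "large");
        put(nested, "bar", "color", "blue");
        System.out.println(nested.get("foo").get("size")); // large
        System.out.println(nested.get("bar").get("color")); // blue
    }
}
```

With this helper you never have to null-check the inner map yourself.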
Your data structure, for example, looks like JSON. If you need your Map to save or exchange data, you could use a JSON library. | unknown | |
d13383 | train | This pull request includes a patch for the problem; the code is updated in git but is apparently not up to date even in the latest available gem. It's a one-line edit to bin/deltacloudd
https://github.com/apache/deltacloud/pull/3 | unknown | |
d13384 | train | No. Swift is a compiled language, and the runtime doesn't include the compiler. The iOS SDK doesn't provide a way to evaluate run-time Swift code.
You can execute JavaScript using JavaScriptCore, and JavaScriptCore makes it pretty easy to expose Swift objects and functions to the script. Maybe that will help you. | unknown | |
d13385 | train | You really shouldn't do things like this:
public boolean isUnlockValid(Offer offer) {
return ((offer.unlockExpirationDate == null) || (System
.currentTimeMillis() < offer.unlockExpirationDate.getTime()));
}
Create a class instance instead, which captures System.currentTimeMillis() and uses it. This way your filter will stay stable over time.
Consider something like this
class UnlockValidPredicate implements Predicate<Offer> {
public UnlockValidPredicate() {
this(System.currentTimeMillis());
}
public UnlockValidPredicate(long millis) {
this.millis = millis;
}
@Override public boolean apply(Offer offer) {
return offer.unlockExpirationDate == null
|| millis < offer.unlockExpirationDate.getTime();
}
private final long millis;
}
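The same idea works without Guava as well; a usage sketch with plain java.util.function.Predicate (the Offer shape is assumed from the question):

```java
import java.util.Arrays;
import java.util.Date;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

class Offer {
    final Date unlockExpirationDate; // null means "never expires"
    Offer(Date d) { unlockExpirationDate = d; }
}

public class FilterDemo {
    public static void main(String[] args) {
        long now = System.currentTimeMillis(); // captured ONCE, so the filter stays stable
        Predicate<Offer> unlockValid = o ->
            o.unlockExpirationDate == null || now < o.unlockExpirationDate.getTime();

        List<Offer> offers = Arrays.asList(
            new Offer(null),                    // never expires -> kept
            new Offer(new Date(now + 60_000)),  // expires in a minute -> kept
            new Offer(new Date(now - 60_000))); // already expired -> dropped

        List<Offer> valid = offers.stream()
                                  .filter(unlockValid)
                                  .collect(Collectors.toList());
        System.out.println(valid.size()); // 2
    }
}
```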
Also consider getting rid of nulls. Setting unlockExpirationDate to new Date(Long.MAX_VALUE) is good enough for "never expires", isn't it?
That's not it. The current time was Sep 7th; the unlockExpirationDate was Aug 30th. It's not a matter of days between the filtering and the debugging I did. What else can it be?
Get rid of Date; it's a stupid mutable class. Most probably, you changed it somehow. Things like
MyClass(Date date) {
this.date = date;
}
and
Date getDate() {
return date;
}
are a recipe for disaster. The best solution is to use an immutable class (available in Java 8 or JodaTime). The second best is to use long millis. The last is to clone the Date everywhere.
A: What seems most likely is that the predicate was true when you did the filtering, and then the predicate became false later -- which seems perfectly possible when you're using System.currentTimeMillis(). | unknown | |
d13386 | train | Passing formEditUsers() into the ajax success callback seems to work fine. | unknown | |
d13387 | train | The ionic storage has moved to '@ionic/storage'
as of rc.0
so
import { Storage } from '@ionic/storage';
You can't specify whether it's localstorage/SQL or anything, but it uses each in order until it's able to use one.
Just create an instance and use it
new Storage().set("key","value");
A: Old Question... but it just came up in my Google Results:
import { Storage } from '@ionic/storage';
Current as of RC4 | unknown | |
d13388 | train | I've checked your code using Xcode 7, which may not be ideal for resolving this issue because I had to covert your code to Swift 2.0, but here was what I found out.
ISSUE
*
*First time opening the app, this block:
if currentUser() != nil {
initialViewController = pageController
}
else {
initialViewController = storyboard.instantiateViewControllerWithIdentifier("LoginViewController") as UIViewController
}
self.window?.rootViewController = initialViewController
Will initialize LoginViewController and make it the current window's rootViewController.
At this point there is no pageController initialized
*When user taps on the button to go to the Profile screen, this method will be called
func goToProfile(button: UIBarButtonItem) {
pageController.goToPreviousVC()
}
At this point, pageController is initialized, and off course, there is NOTHING in the viewControllers array. Let's see what happen in the goToPreviousVC method:
Original method looks like this:
let nextVC = pageViewController(self, viewControllerAfterViewController: viewControllers[0] as UIViewController)!
setViewControllers([nextVC], direction: UIPageViewControllerNavigationDirection.Forward, animated: true, completion: nil)
One thing you can see obviously is: calling viewControllers[0] could give you a crash because viewControllers is an empty array.
If you use Swift 2.0, it doesn't even let you compile your code :)
SOLUTION
Let's go directly to the solution: Ensure that the pageController is available before trying to call it's viewControllers.
I blindly tried fixing your code in Swift 2.0 and found out that this method would work for you:
BEFORE: In LoginViewController.swift line 63
let vc = UIStoryboard(name: "Main", bundle: nil).instantiateViewControllerWithIdentifier("CardsNavController") as? UIViewController
self.presentViewController(vc!, animated: true, completion: nil)
AFTER: Let's fix it like this
let navc = UIStoryboard(name: "Main", bundle: nil).instantiateViewControllerWithIdentifier("CardsNavController") as! UINavigationController
if let viewControllers = pageController.viewControllers where viewControllers.count == 0 {
pageController.setViewControllers([navc.viewControllers[0]], direction: .Forward, animated: false, completion: nil)
}
self.presentViewController(pageController, animated: true, completion: nil)
It's working well here and probably I don't need to show you how the screen transition should look :)
In case you would like to have the fixed source code as well, please find it HERE. Basically I converted your code to Swift 2.0 and ignored unnecessary parts like Facebook authentication for faster investigation.
Good luck & Happy coding! | unknown | |
d13389 | train | *
*length is a function and it should be length()
*You don't need to dereference the pointer if you use ->
*The result returned from length() is size_t
Here is what I would use:
int length = static_cast<int>((*ifxPtr).length());
A: foo-> is shorthand for (*foo).. Don't double up operators, also it should be length():
for (int i = 0; i < ifxPtr->length(); i++)
Also, be careful with that design; the possibility of running into Undefined Behaviour with a simple mistake is high.
A: If you put a * before a pointer, it means using the value the pointer points to.
string s, *p;
p = &s;
Here (*p).length() is the same as s.length(); if you want to access it through the pointer, you have to use it like this: p->length(). | unknown | |
d13390 | train | Exceptions are the only thing that you haven't understood IMHO: exceptions are meant to be out of your control, meant to be caught and dealt with from outside the scope they are thrown in. The try block has a specific limit: it should contain related actions. For example take a database try catch block:
$array = array();
try {
// connect throws exception on fail
// query throws exception on fail
// fetch results into $array
} catch (...) {
$array[0]['default'] = 'me';
$array[0]['default2'] = ...;
...
}
as you can see I put every database related function inside the try block. If the connection fails the query and the fetching is not performed because they would have no sense without a connection. If the querying fails the fetching is skipped because there would be no sense in fetching no results. And if anything goes wrong, I have an empty $array to deal with: so I can, for example, populate it with default data.
Using exceptions like:
$array = array();
try {
if (!file_exists('file.php')) throw new Exception('file does not exists');
include('file.php');
} catch (Exception $e) {
trigger_error($e->getMessage());
}
makes no sense. It's just a longer version of:
if (!file_exists('file.php')) trigger_error('file does not exists');
include('file.php'); | unknown | |
d13391 | train | If you don't know whether the array is sorted, you could use code like this to find the value in the array that is the closest to the passed in value (higher or lower):
var list = [2, 5, 9, 12, 15, 19, 22, 25, 29, 32, 35, 39, 42];
function findClosestValue(n, list) {
var delta, index, test;
for (var i = 0, len = list.length; i < len; i++) {
test = Math.abs(list[i] - n);
if ((delta === undefined) || (test < delta)) {
delta = test;
index = i;
}
}
return(list[index]);
}
If you want the closest number without going over and the array is sorted, you can use this code:
var list = [2, 5, 9, 12, 15, 19, 22, 25, 29, 32, 35, 39, 42];
function findClosestValue(n, list) {
var delta, index;
for (var i = 0, len = list.length; i < len; i++) {
delta = n - list[i];
if (delta < 0) {
return(index !== undefined ? list[index] : undefined); // index can legitimately be 0, so test against undefined
}
index = i;
}
return(list[index]);
}
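If the array is sorted and large, the "closest without going over" lookup can also be done in O(log n) with a binary search (a sketch, not part of the original answers):

```javascript
// Largest value in sorted `list` that is <= n, or undefined if none.
function findClosestValueSorted(n, list) {
  var lo = 0, hi = list.length - 1, best;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    if (list[mid] <= n) {
      best = list[mid];   // candidate; look for a larger one to the right
      lo = mid + 1;
    } else {
      hi = mid - 1;       // too big; look to the left
    }
  }
  return best;
}

var list = [2, 5, 9, 12, 15, 19, 22, 25, 29, 32, 35, 39, 42];
console.log(findClosestValueSorted(13, list)); // 12
console.log(findClosestValueSorted(1, list));  // undefined
```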
A: Here is a jsFiddle with a solution that works for an infinite series of the combination "+3+4+3+3+4+3+3+4+3+3+4....." which seems to be your series. http://jsfiddle.net/9Gu9P/1/ Hope this helps!
UPDATE:
After looking at your answer I noticed you say you want the closest number in the sequence, but your examples all go to the next number in the sequence whether it is closest or not compared to the previous number, so here is another jsFiddle that works with that in mind so you can choose which one you want :).
http://jsfiddle.net/9Gu9P/2/
A: function nextInSequence(x){
//sequence=2, 5, 9, 12, 15, 19, 22, 25, 29, 32, 35, 39, 42
var i= x%10;
switch(i){
case 2:case 5:case 9: return x;
case 0:case 1: return x+ 2-i;
case 3:case 4: return x+5-i;
default: return x+9-i;
}
}
alert(nextInSequence(10)) | unknown | |
d13392 | train | Why don't you add KASlideShow to a header view in your UITableViewController?
First create that header view, add one more view to it, and make that view a subclass of KASlideShow.
Then connect your IBOutlet slideShow to this view,
then add the KASlideShow settings to viewDidLoad.
Your code should look like the following:
#import "SlideShowTableViewController.h"
#import "KASlideShow.h"
@interface SlideShowTableViewController ()
@property (strong,nonatomic) IBOutlet KASlideShow * slideShow;
@end
@implementation SlideShowTableViewController
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
[self.slideShow setDelay:5]; // Delay between transitions
[self.slideShow setTransitionDuration:5]; // Transition duration
[self.slideShow setTransitionType:KASlideShowTransitionFade]; // Choose a transition type (fade or slide)
[self.slideShow setImagesContentMode:UIViewContentModeScaleAspectFill]; // Choose a content mode for images to display
[self.slideShow addImagesFromResources:@[@"adidas.jpg",@"adidas_neo.jpeg"]]; // Add images from resources
[self.slideShow addGesture:KASlideShowGestureTap]; // Gesture to go previous/next directly on the image
[self.slideShow start];
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
#pragma mark - Table view data source
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
return 0; // <<-- set your sections count
}
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
return 0; // <<-- set your rows count in section
}
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"reuseIdentifier" forIndexPath:indexPath];
// Configure the cell...
return cell;
}
and your Interface Builder should look like | unknown | |
d13393 | train | With reference to the zf2 docs - filesystem cache, there is a boolean config param file_locking which locks files on writing. | unknown | |
d13394 | train | With --ntasks=4, srun will launch 4 identical Perl processes. What you want is --ntasks=1 and --cpus-per-task=4 so that Slurm allocates four cores on one node for your job. | unknown | |
d13395 | train | A better flow to use is probably the Custom Authentication Flow. It is not supported in the AWSAuthUIViewController, but you may use the AWSCognitoIdentityUserPool to perform the flow. The custom challenge could be leveraged to pass information back to the client before ending the flow. | unknown | |
d13396 | train | Since your file is located in a constants directory, you should probably use a .env file.
Here is a guide on how to achieve this in Nuxt: https://stackoverflow.com/a/67705541/8816585
If you really want to have access to it in a non-.vue file, you can import it as usual with something like this
/constants/url.js
import store from '~/store/index'
export const test = () => {
// the line below depends on your store, of course
return store.modules['@me'].state.email
}
PS: getters, dispatch and everything alike is available here.
Then call it in a page or .vue component like this
<script>
import { test } from '~/constants/url'
export default {
mounted() {
console.log('call the store here', test())
},
}
</script>
As for the lifecycle question, since the url.js file is not a .vue file but a regular JS one, it has no idea about any Vue/Nuxt lifecycles. | unknown | |
d13397 | train | No, because those functions are inline; they are inlined at compile time
and a Class or KClass is using reflection at runtime
there are some tricks that you can do, like with the companion class, but that does not need the KClass<T> at all; anything else that provides a generic argument of T would work just as well for the reified type info
PS: reflection also cannot help you reliably because inline functions do not really exist at runtime, as explained by their modifier inline
A: Unless I am missing something, everything you can do with T in a function with reified T can be translated to a use of KClass: e.g. x is T becomes clazz.isInstance(x), x as T becomes clazz.cast(x), calls of other functions with reified type parameters are translated recursively, etc. Since the function has to be inline, all APIs it uses are visible at the call site so the translation can be made there.
But there's no automatic way to do this translation, as far as I know. | unknown | |
d13398 | train | To specify which events you want your bot to receive, first think about which events your bot needs to operate. Then select the required intents and add them to your client constructor, as shown below.
All gateway intents, and the events belonging to each, are listed on the Discord API documentation. If you need your bot to receive messages (MESSAGE_CREATE - "messageCreate" in discord.js), you need the GUILD_MESSAGES intent. If you want your bot to post welcome messages for new members (GUILD_MEMBER_ADD - "guildMemberAdd" in discord.js), you need the GUILD_MEMBERS intent, and so on.
Example:
const client = new Discord.Client({
intents: [
Discord.Intents.FLAGS.GUILDS,
Discord.Intents.FLAGS.GUILD_MESSAGES
]
});
A: The issue is exactly what the error says it is; your client is missing intents. You need to specify what events and data your bot intends to work with (e.g. guild member presences, messages, etc).
i try many way on internet but it still not sold
I don't know what tutorials or guides you're looking at, but only discord.js v11 and under can work without intents. Discord.js v12 and the latest version (v13) require intents to be specified. What intents you need to specify depends on what you want your bot to do. Does your bot need to detect messages and respond to them? Then enable the GUILD_MESSAGES intent. If your bot does not need to, for example, track guild member presences, you do not need to enable a GUILD_PRESENCES intent.
Before continuing, I would highly suggest you check out the official discord.js guide on how to create a bot on the latest version, which should have been the first place you looked for this information.
Here is a simple way to solve your issue based on the code on that guide, if you are using discord.js v13:
// Require the necessary discord.js classes
const { Client, Intents } = require('discord.js');
// Create a new client instance
const client = new Client({ intents: [Intents.FLAGS.GUILDS, Intents.FLAGS.GUILD_MESSAGES] });
Here is another way of doing it, if you are on discord.js v12 (this may also work in v13):
// Require the necessary discord.js classes
const { Client } = require('discord.js');
// Create a new client instance
const client = new Client({ intents: ["GUILDS", "GUILD_MESSAGES"], ws: {intents: ["GUILDS", "GUILD_MESSAGES"]} });
Note that the intents I specified in the above examples may not be enough for you, depending on what your bot is supposed to do in the future. But I believe those examples will be enough to get your current code working without the error you are experiencing.
For a full list of intents, check the discord developer docs here. | unknown | |
d13399 | train | The std::future destructor will block if it’s a future from std::async and is the last reference to the shared state. I believe what you’re seeing here is
*
*the call to async returns a future, but
*that future is not being captured, so
*the destructor for that future fires, which
*blocks, causing the tasks to be done in serial.
Explicitly capturing the return value causes the two destructors to fire only at the end of the function, which leaves both tasks running until they’re done. | unknown | |
d13400 | train | Shortly, if I get your question right, then yes, your description is correct.
It's not very clear from the question what "two memory locations, x and y" means.
Based on how you put them to use in the description, I'd presume they're something like 2 pointers:
int *x, *y;
and "Start with x = x1, y = y1" means assigning addresses to those pointers, e.g.
x = (int*)(0x2000); // x = x1
y = (int*)(0x8000); // y = y1
Now to your question: "After 5. completes, from the point of view of the DMA device, x = ?"
So after step 3: x = x2, y = y1 in memory; x = x1, y = y1 in cache.
After step 4: x = x2, y = y1 in memory; x = x1, y = y2 in cache.
The DMA accesses the values of the x/y pointers in memory; the CPU accesses them in cache. Because cache and memory are not in sync (the cache is dirty), at that stage the CPU and DMA would get different values.
And finally after step 5...
It depends. The cache controller operates on cache lines, areas of memory of some size, something like 32 bytes, 64 bytes, etc. (could be bigger or smaller). So when the CPU cleans/flushes a cache line containing some address, it flushes the content of the whole cache line to memory. Whatever was in memory is overwritten.
There are basically 2 situations:
*
*x and y pointers are both in the same cache line. That means the values from the cache would override memory and, you are correct, x = x1, y = y2 would end up in both memory and cache after that.
*x and y pointers are in different cache lines. That's simple: only one variable is affected; the other cache line is still dirty. | unknown |