[
"stackoverflow",
"0017384983.txt"
] | Q:
In Android, what is the difference between getAction() and getActionMasked() in MotionEvent?
I am confused by the two methods in Android. It
seems that both methods tell you what kind of event it is,
i.e., whether it is a down or up event.
When will I use which?
public void onTouchEvent(MotionEvent e)
Don't quote the documentation please, because I read it, and I don't see any parameter I can supply to either of the methods to get something different.
public final int getAction ()
and
public final int getActionMasked()
A:
getAction() returns the event type (up, down, move, ...) combined with pointer information in the upper bits.
getActionMasked() returns just the event type; the pointer information is masked out.
For example:
getAction() returns 0x0105 (a second finger going down).
getActionMasked() will return 0x0005, which is 0x0105 & ACTION_MASK.
The value of ACTION_MASK is 0xFF. It covers the basic actions:
ACTION_DOWN 0, UP 1, MOVE 2
ACTION_POINTER_DOWN 5, UP 6
The value of ACTION_POINTER_ID_MASK (since renamed to ACTION_POINTER_INDEX_MASK) is 0xFF00. It masks out the pointer index, which was once encoded in the following deprecated constants:
ACTION_POINTER_1_DOWN 0x0005
ACTION_POINTER_2_DOWN 0x0105
ACTION_POINTER_3_DOWN 0x0205
...
A:
Yes, they both return the action (up/down etc.), but getAction() may return the action combined with pointer information, in which case the values may differ. getActionMasked() will always return "simple" actions with the pointer information "masked out" (get it?). You would then call getActionIndex() on the same event to get the index of the pointer that triggered it. Note that you will most commonly see this on multi-touch devices with multiple points of contact (pointers). The pointer index is essentially a way of matching events to contact points so you can tell them apart.
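The bit arithmetic itself is easy to see in isolation; a quick sketch (plain Python, with the relevant MotionEvent constant values hard-coded) decomposing the 0x0105 example:

```python
# Constant values from android.view.MotionEvent
ACTION_MASK = 0x00FF
ACTION_POINTER_INDEX_MASK = 0xFF00
ACTION_POINTER_INDEX_SHIFT = 8
ACTION_POINTER_DOWN = 5

action = 0x0105  # what getAction() reports: pointer index 1, pointer down

# What getActionMasked() computes:
masked = action & ACTION_MASK
# What getActionIndex() computes:
index = (action & ACTION_POINTER_INDEX_MASK) >> ACTION_POINTER_INDEX_SHIFT

print(hex(masked), index)  # 0x5 1
assert masked == ACTION_POINTER_DOWN
assert index == 1
```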
|
[
"drupal.stackexchange",
"0000012863.txt"
] | Q:
Which Drupal core function might call SELECT * FROM on custom module table?
I have a Drupal 6 website, hosted on a shared hosting service, which is used to communicate with a desktop software application. Over the years I extended the user profile with over 70 different profile fields. With over 45,000 users this resulted in a profile_values table containing over 1,200,000 rows; I was getting complaints about slow queries, and I've read that a table this size might lead to performance issues. Switching hosts or upgrading Drupal 6 to Drupal 7 is not an option!
Since the profile data is only used when the software communicates with the website, which is done once per application start I did not really have a need for such a large profile table so I created a custom module which stores the profile data per user in a single row as a serialized array. This also has the benefit that I don't need to update my website every time a new profile value is added in the software.
But my MySQL slow query log is filling up again. The query that's logged is SELECT * FROM {my_table}. Since I know this table is quite large, I was careful not to call such a query myself, and indeed I cannot find this query in my module or in any of the modules that use it; I always select a single column and pass a specific parameter to check against.
As far as I can see there are only two functions that might have an influence on this. I implement hook_user() for insert and update operations and I call drupal_write_record() when saving the user data. But running through the code I fail to see where such a query might be called.
Can anyone explain where this query might originate from within the Drupal system, or how I can extend MySQL (5.1) logging to tell me which script this query comes from? (As it's only supposed to be called from a software application, there's no equivalent web page a user can access.)
A:
If grep/ack-grep can't find a SELECT * (likely), then just add a check where the query is executed and log a backtrace whenever it matches:
$debug_query = 'SELECT * FROM {mytable}';
if (substr($query, 0, strlen($debug_query)) == $debug_query) {
  error_log(print_r(debug_backtrace(), TRUE), 4);
}
(or match the whole query, etc.) and then watch the Apache error log for the backtraces pouring in. If you don't want to trash the log, log to a file in /tmp instead.
|
[
"math.stackexchange",
"0001211107.txt"
] | Q:
Determining the size of an automorphism group for a given design
I'm trying to wrap my head around the idea of automorphisms, and I'm having a lot of issues.
One of the questions I've been given as an exercise is thus;
Let $\mathbb{V} = \{1, 2, 3, 4, 5, 6\}$ and $\mathbb{B} = \{ \{1, 2, 3\}, \{1, 5, 6\}, \{3, 4, 5\}, \{2, 4, 6\} \}$. Determine $|Aut(\mathbb{V}, \mathbb{B})|$.
With this, I'm a little confused. I've noted that, at least initially, the permutation shifting the blocks is $(1,3,5)(2,4,6)$. If I fix the element $1$, then the permutations $(2,5)(3,6)$ and $(2,6)(3,5)$ also map the blocks to themselves. If I try to fix two elements, I find that I can't find any such automorphism. But, I'm not sure where to go from here.
I'm under the impression that I have to use the Orbit-Stabilizer Theorem, but again, my implementation of this is a bit dodgy. If anyone could point me towards some information or areas in which I could read about this topic of group theory within the context of designs, that'd be much appreciated.
A:
You could show that there are only four automorphisms fixing $1$, namely, $(2,5)(3,6)$, $(2,6)(3,5)$, $(2,3)(5,6)$ and the identity. These together with $(1,3,5)(2,4,6)$ generate a transitive group (i.e. there is a single orbit), so by the Orbit-Stabilizer Theorem, the full automorphism group $G$ has order $6 \times 4 = 24$.
It is not clear what exactly they mean by "determine". The three automorphisms you have found generate $G$, so you could say that you have determined it. It would be tedious if they expected you to write down all $24$ permutations.
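Since the design is tiny, the count can also be verified by brute force: enumerate all $6! = 720$ permutations of the point set and keep those that map the block set onto itself. A quick Python sketch:

```python
from itertools import permutations

blocks = [{1, 2, 3}, {1, 5, 6}, {3, 4, 5}, {2, 4, 6}]
block_set = {frozenset(b) for b in blocks}

count = 0
for perm in permutations(range(1, 7)):
    sigma = dict(zip(range(1, 7), perm))  # the candidate point permutation
    image = {frozenset(sigma[x] for x in b) for b in blocks}
    if image == block_set:  # blocks are mapped onto blocks
        count += 1

print(count)  # 24, matching the Orbit-Stabilizer computation
```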
|
[
"stackoverflow",
"0009452353.txt"
] | Q:
ANDROID : Unable to set action in CONNECTIVITY_CHANGE in my service
I have a service where I want to register for the the Broadcast action CONNECTIVITY_CHANGE (android.net.conn.CONNECTIVITY_CHANGE) to update my online status. I am doing it in the following way
Service OnCreate Method
final IntentFilter theFilter = new IntentFilter();
theFilter.addAction(android.net.conn.CONNECTIVITY_CHANGE);
this.ConnStateReceiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
// Do whatever you need it to do when it receives the broadcast
// Example show a Toast message...
}
};
But at compile time it does not recognize the android.net.conn part of android.net.conn.CONNECTIVITY_CHANGE; it is showing some package problem. Please help me fix this. As I am registering the broadcast receiver dynamically, I think there is no need to mention it in the manifest file. Please advise; I am stuck here.
A:
Try using:
theFilter.addAction(android.net.ConnectivityManager.CONNECTIVITY_ACTION);
The compiler rejects android.net.conn.CONNECTIVITY_CHANGE because that is the broadcast's action string, not a Java identifier; ConnectivityManager.CONNECTIVITY_ACTION is the constant that holds the string "android.net.conn.CONNECTIVITY_CHANGE".
|
[
"stackoverflow",
"0060873479.txt"
] | Q:
How do I work with table variables with multiple rows in SAP HANA DB?
In SQL sometimes it is easier and faster to use table variables.
I know I can't use insert to table var in HANA DB, but what would be the best practice to do something similar?
I tried using SELECT to populate the variable, but I can't insert multiple rows.
Do I have to use temporary table instead?
I would like to have a table with some values I create, like the below example I use for SQL, such a way I can use it later in the query:
Declare @temp table([Group] Int, [Desc] nvarchar(100))
insert into @temp ([Group], [Desc])
Values (1,'Desc 1'), (2,'Desc 2'), (3,'Desc 3'), (4,'Desc 4'), (5,'Desc 5')
In HANA, I am able to create the variable, but not able to populate it with multiple rows :(
Is there any best way to do so?
Thank you so much.
A:
The "UNION"-approach is one option to add records to the data that a table variable is pointing to.
Much better is to use arrays to add, remove, and modify data, and finally turn the arrays into a table variable via the UNNEST function. This option has been available for many years, even with HANA 1.
Alternatively, SAP HANA 2 (starting with SPS 03, I believe), offers additional SQLScript commands, to directly INSERT, UPDATE, and DELETE on table variables. The documentation covers this in "Modifying the Content of Table Variables".
Note that this feature comes with a slightly different syntax for the DML commands.
As of SAP HANA 2 SPS 04, there is yet another syntax option provided for this:
"SQL DML Statements on Table Variables".
This one, finally, looks like "normal" SQL DML against table variables.
Given these options, the "union"-approach is the last option you should use in your coding.
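For illustration, here is an untested sketch of the array-plus-UNNEST approach the answer recommends; treat the exact syntax as an assumption to verify against the SQLScript reference for your HANA version:

```sql
DO BEGIN
    -- Build the rows in arrays first...
    DECLARE grp INT ARRAY := ARRAY(1, 2, 3, 4, 5);
    DECLARE descr NVARCHAR(100) ARRAY
        := ARRAY('Desc 1', 'Desc 2', 'Desc 3', 'Desc 4', 'Desc 5');
    -- ...then turn them into a table variable via UNNEST.
    tab = UNNEST(:grp, :descr) AS ("GRP", "DESCR");
    SELECT * FROM :tab;
END;
```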
|
[
"stackoverflow",
"0020199910.txt"
] | Q:
Joining two tables in SQL Server
I have two tables. One called Employee and the other called Departments
These are the two tables:
CREATE TABLE Department (
department_code NCHAR(4),
department_name NVARCHAR(15),
city NVARCHAR(20),
budget MONEY
)
CREATE TABLE Employee (
employee_id NCHAR(6),
name NVARCHAR(20),
position NVARCHAR(20),
salary MONEY,
dcode NCHAR(3),
)
I have to write a statement that lists the name of each employee and the name of the department they work in, for all employees who have a salary over £20,000.
This means I have to join the Employee and Department tables to get an output.
I thought it might be something like this:
SELECT Emplyee.name, Department.department_name
FROM Employee
FULL OUTER JOIN Department
ON Employee.salary > 20000;
but it has errors.
How do I do this?
A:
Assuming dcode is a foreign key for table department you can do:
SELECT e.NAME,d.department_name
FROM Employee e
INNER JOIN Department d ON e.dcode = d.department_code
WHERE e.salary > 20000;
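For readers without a SQL Server instance handy, the join logic can be sanity-checked with SQLite from Python (a sketch: the schema is pared down to the columns the join touches, and the sample rows are made up):

```python
import sqlite3

# Recreate a minimal version of the two tables in an in-memory database.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Department (department_code TEXT, department_name TEXT);
CREATE TABLE Employee (name TEXT, salary REAL, dcode TEXT);
INSERT INTO Department VALUES ('D1', 'Sales'), ('D2', 'IT');
INSERT INTO Employee VALUES ('Alice', 25000, 'D1'),
                            ('Bob', 15000, 'D2'),
                            ('Carol', 30000, 'D2');
""")

rows = con.execute("""
    SELECT e.name, d.department_name
    FROM Employee e
    INNER JOIN Department d ON e.dcode = d.department_code
    WHERE e.salary > 20000
    ORDER BY e.name
""").fetchall()
print(rows)  # [('Alice', 'Sales'), ('Carol', 'IT')] -- Bob is filtered out
```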
|
[
"stackoverflow",
"0001642437.txt"
] | Q:
call time reference issue with php
function get_arr($arr)
{
unset($arr[0]);
}
$arr1 = array(1,2);
$arr2 = array(1,2);
get_arr(&$arr1);
get_arr($arr2);
echo count($arr1);
echo count($arr2);
I got :
Warning: Call-time pass-by-reference
has been deprecated; If you would like
to pass it by reference, modify the
declaration of get_arr(). If you
would like to enable call-time
pass-by-reference, you can set
allow_call_time_pass_reference to true
in your INI file
But the output is:
12
Which means the call-time pass-by-reference took effect.
Why do the two outputs give contradictory messages?
A:
Deprecated does not mean non-functional, it's just not recommended.
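The supported way to keep the behaviour without the warning is to move the & from the call site into the function declaration; a minimal sketch of the OP's code, adjusted:

```php
<?php
// Declare the parameter by reference in the function signature,
// instead of using & at the call site.
function get_arr(&$arr)
{
    unset($arr[0]);
}

$arr1 = array(1, 2);
get_arr($arr1);    // no & here, and no deprecation warning
echo count($arr1); // 1
```

Call-time pass-by-reference was later removed outright (PHP 5.4), so the declaration-site form is the only one that still works on modern PHP.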
|
[
"stackoverflow",
"0050085902.txt"
] | Q:
Unique_Ptrs Syntax {}
I am writing an Entity Component System based off a tutorial for making a game that I am following.
Currently, the function to add an entity of class "Entity" to an "entities" vector goes like:
Entity& addEntity() {
Entity* e = new Entity();
std::unique_ptr<Entity> uPtr{ e };
entities.emplace_back(std::move(uPtr));
return *e;
}
The code is working properly as per the tutorial. However, I am unsure about the actual syntax of Unique_ptr in line:
std::unique_ptr<Entity> uPtr{ e };
What is actually happening inside the {} braces? As I understand it, I'm assigning my uPtr unique pointer to the value of the pointer e? I would really appreciate an explanation regarding the unique_ptr syntax, especially with the curly braces.
Thanks.
A:
The statement creates an instance of std::unique_ptr using its single-argument constructor, which takes ownership of the raw pointer e. In this case, it is equivalent to the older style std::unique_ptr&lt;Entity&gt; uPtr(e). The braces are the uniform (braced) initialization syntax available since C++11.
More details: https://google.github.io/styleguide/cppguide.html#Braced_Initializer_List
|
[
"stackoverflow",
"0050985511.txt"
] | Q:
SQL: select different columns to be returned from a stored procedure
Let's say I have a table mytable with several columns (this is not entirely true, see Edit):
mytable
id data description created_at viewed_times
1 10 'help wanted' '20180101 04:23' 45
2 20 'customer registered' '20180504 03:12' 1
...
I created a stored procedure that returns data from this table. The problem is that sometimes I need it to return only id and data columns, and sometimes I need additional information, like description, created_at etc.
My idea is to create three dummy variables @display_description, @display_created_date, @display_viewed_times. When a dummy variable equals 1, I display corresponding column. For example, for command
exec my_procedure @display_description = 1
I expect output
id data description
1 10 'help wanted'
2 20 'customer registered'
...
How I implement it in the procedure is:
if @display_description = 1
select id, data, description from mytable
else
select id, data from mytable
The problem is, if I want to have 3 switches (one for each column), I have to write 8 conditions in my if statement. (My select statement is a complex one. In addition, some of the columns like viewed_times have to be calculated, if I want them displayed. Therefore, writing many if statements makes the query very clumsy, and I want to avoid that)
Is there an easy way to select columns based on the switches? And if no, what approach would you recommend to return different number of columns from a stored procedure?
Edit: Sorry, I said that the table mytable already exists with all columns. That is not true; some of the columns have to be calculated before being displayed. For example, column viewed_times doesn't exist. To display it, I'll need to do the following:
if @display_description = 1
~ several selects to calculate the column viewed_times ~
select id, data, description from mytable join table in which I just calculated the viewed_times column
else
select id, data from mytable
Calculating these columns is time consuming, and I would like to do that only if I need those columns displayed.
That's why dynamic SQL will probably not work
Update: I accepted the answer with Dynamic SQL, but what I did is the following:
if @display_viewed_times = 1
begin
~ calculate column viewed_times ~
update my_table
~ add column viewed_times to the table my_table ~
end
for each of the optional columns. Then I return the table using
select * from my_table
which gives me a different number of columns depending on the switches
A:
One method would be to use Dynamic SQL. This is pseudo-SQL, as there is an absence of information here to achieve a full answer. It's also untested, as I don't have much/any data to really run this against.
--Your input parameters
DECLARE @description bit, @createdate bit, @viewedtimes bit /*...etc...*/;
--Now the SP
DECLARE @SQL nvarchar(MAX);
SET @SQL = N'SELECT ' +
STUFF(CONCAT(N','+ NCHAR(10) + N' ' + CASE WHEN @description = 1 THEN QUOTENAME(N'description') END,
N','+ NCHAR(10) + N' ' + CASE WHEN @createdate = 1 THEN QUOTENAME(N'created_at') END,
N','+ NCHAR(10) + N' ' + CASE WHEN @viewedtimes = 1 THEN QUOTENAME(N'viewed_times') END),1,9,N'') + NCHAR(10) +
N'FROM YourTable' + NCHAR(10) +
N'WHERE...;';
PRINT @SQL; --your best debugging friend.
EXEC sp_executesql @SQL /*N'@someParam int', @someParam = @inParam*/;
Ensure that you properly parametrise your query when you use Dynamic SQL. Don't concatenate your strings!
For example, the correct format would be N'WHERE yourColumn = @yourParam AND OtherColumn = @otherParam' and then provide the values of @yourParam and @otherParam in sp_executesql (as I have demonstrated within the comment).
Don't, however, do something like: N'WHERE yourColumn = ''' + @myParam + N''' AND otherColumn = ' + CONVERT(varchar(5),@secondParam). This would be open to SQL injection (which is not your friend).
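The same point can be illustrated outside T-SQL; a small sketch using Python's sqlite3 module as a neutral stand-in (parameter markers differ, `?` here versus `@param` with sp_executesql, but the principle is identical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col TEXT)")
con.execute("INSERT INTO t VALUES ('safe')")

user_input = "x' OR '1'='1"  # hostile value

# Parameterised: the value is bound, never spliced into the SQL text.
safe_rows = con.execute(
    "SELECT * FROM t WHERE col = ?", (user_input,)).fetchall()
print(safe_rows)    # [] -- the injection attempt is just an odd string

# Concatenated (DON'T do this): the value becomes part of the statement.
unsafe_rows = con.execute(
    "SELECT * FROM t WHERE col = '" + user_input + "'").fetchall()
print(unsafe_rows)  # [('safe',)] -- the OR '1'='1' clause fired
```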
|
[
"physics.stackexchange",
"0000436749.txt"
] | Q:
Transit of Venus and the computation of the Astronomical Unit
I have searched for the computation of the AU. The two best websites I found about it were
https://sunearthday.nasa.gov/2012/articles/ttt_75.php
How did Halley calculate the distance to the Sun by measuring the transit of Venus?
In the first one, the author declares a value for the angular speed of Venus (the equation appears as an image in the original and is not reproduced here).
However, I can't get this result, neither by using the fact that the planet has a sidereal period of 225 days, which leads to
$$\omega = \frac{360^\circ}{225 \times 24\ \text{h}} \approx 0.067^\circ/\text{h},$$
nor by using the fact that the synodic period is 584 days, which leads to
$$\omega = \frac{360^\circ}{584 \times 24\ \text{h}} \approx 0.026^\circ/\text{h}.$$
I really believe that the synodic period is the one that must be used for this computation (if I'm wrong, please tell me), since it results from the relative motion of Venus observed from Earth.
The second reference isn't clear about how to reach its central relation (also shown only as an image in the original).
I would like to highlight that these references complement each other, since the second one explains how to determine the solar parallax, not explained in the first one, while the first one makes clear how to compute the angular sizes of the chords seen as the paths of Venus by each observer.
So, my question is: can anyone derive these two computations (the equations are missing from this transcription)?
Or, if easier, show a clear and detailed method for the computation of the AU from the time measurements of the transit of Venus?
A:
Yes, you have to use the synodic angular speed of Venus (which is simply the difference between the sidereal angular speeds of Venus and Earth), but this will give you the angular speed of Venus with respect to the Sun. What you need is the angular speed of Venus across the sky, as seen from Earth. Let's call the distance of Venus to the Sun $d$, in astronomical units. At transit, the distance between Venus and Earth is then $1-d$, and Venus is moving perpendicular to the Sun-Earth axis. The orbital velocity of Venus is
$$
v \sim d\omega = (1-d)\omega',
$$
where $\omega'$ is the angular speed of Venus across the sky, as seen from Earth. Using $\omega = 0.026^\circ/\text{h}$, and from your first article, $d = 0.69\ $AU, we get
$$
\omega' = \frac{d}{1-d}\omega\approx 0.058^\circ/\text{h}.
$$
There are additional effects (e.g. the rotation of the Earth, which adds a parallax effect to the observations), but this is the gist.
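The arithmetic in the answer is easy to check numerically; a quick Python sketch using the orbital periods and the $d = 0.69\ $AU figure quoted above:

```python
# Synodic angular speed of Venus: the difference of the sidereal angular
# speeds of Venus (225 d) and Earth (365.25 d), converted to degrees/hour.
omega = 360.0 * (1.0 / 225.0 - 1.0 / 365.25) / 24.0
print(round(omega, 3))        # ~0.026 deg/h

# Apparent speed across the sky as seen from Earth: omega' = d/(1-d) * omega,
# with d = 0.69 AU as quoted in the answer.
d = 0.69
omega_prime = d / (1.0 - d) * 0.026
print(round(omega_prime, 3))  # ~0.058 deg/h
```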
|
[
"stackoverflow",
"0000998036.txt"
] | Q:
Tool to export result set from SQL to Insert statements?
I would like to export ad hoc SELECT query result sets from SQL Server directly as INSERT statements.
I would love to see a Save As option "Insert..." alongside the other currently available options (csv, txt) when you right-click in SSMS. I am not exporting from an existing physical table, and I have no permissions to create new tables, so the options to script physical tables are not available to me.
I have to script either from temporary tables or from the result set in the query window.
Right now I can export to csv and then import that file into another table but that's time consuming for repetitive work.
The tool has to create proper INSERTs, understand the data types, and emit NULL for NULL values.
A:
Personally, I would just write a select against the table and generate the inserts myself. Piece of cake.
For example:
SELECT 'insert into [pubs].[dbo].[authors](
[au_id],
[au_lname],
[au_fname],
[phone],
[address],
[city],
[state],
[zip],
[contract])
values( ''' +
[au_id] + ''', ''' +
[au_lname] + ''', ''' +
[au_fname] + ''', ''' +
[phone] + ''', ''' +
[address] + ''', ''' +
[city] + ''', ''' +
[state] + ''', ''' +
[zip] + ''', ' +
cast([contract] as nvarchar) + ');'
FROM [pubs].[dbo].[authors]
will produce
insert into [pubs].[dbo].[authors](
[au_id],
[au_lname],
[au_fname],
[phone],
[address],
[city],
[state],
[zip],
[contract])
values( '172-32-1176', 'White', 'Johnson', '408 496-7223', '10932 Bigge Rd.', 'Menlo Park', 'CA', '94025', 1);
insert into [pubs].[dbo].[authors](
[au_id],
[au_lname],
[au_fname],
[phone],
[address],
[city],
[state],
[zip],
[contract])
values( '213-46-8915', 'Green', 'Marjorie', '415 986-7020', '309 63rd St. #411', 'Oakland', 'CA', '94618', 1);
... etc ...
A couple of pitfalls:
Don't forget to wrap your single quotes.
This assumes a clean database and is not SQL injection safe.
A:
take a look at the SSMS Tools Pack add in for SSMS which allows you to do just what you need.
A:
ATTENTION!!! Provided as-is. At the start of the script you can see an example of how to use the procedure. Of course you can build an INSERT expression if you need one, or add data types for any conversion you require.
The script's result is a set of SELECT expressions concatenated with UNION ALL.
Be careful with the collation of your database. I didn't test collations other than the one I needed.
For long fields I recommend using [Save Result As..] in the result grid instead of Copy, because otherwise the script may be truncated.
/*
USE AdventureWorks2012
GO
IF OBJECT_ID('tempdb..#PersonTbl') IS NOT NULL
DROP TABLE #PersonTbl;
GO
SELECT TOP (100)
BusinessEntityID
, PersonType
, NameStyle
, Title
, FirstName
, MiddleName
, LastName
, Suffix
, EmailPromotion
, CONVERT(NVARCHAR(MAX), AdditionalContactInfo) AS [AdditionalContactInfo]
, CONVERT(NVARCHAR(MAX), Demographics) AS [Demographics]
, rowguid
, ModifiedDate
INTO #PersonTbl
FROM Person.Person
EXEC dbo.p_GetTableAsSqlText
@table_name = N'#PersonTbl'
EXEC dbo.p_GetTableAsSqlText
@table_name = N'Person'
, @table_owner = N'Person'
*/
/*********************************************************************************************/
IF OBJECT_ID('dbo.p_GetTableAsSqlText', 'P') IS NOT NULL
DROP PROCEDURE dbo.p_GetTableAsSqlText
GO
CREATE PROCEDURE [dbo].[p_GetTableAsSqlText]
@table_name NVARCHAR(384) /*= 'Person'|'#Person'*/
, @database_name NVARCHAR(384) = NULL /*= 'AdventureWorks2012'*/
, @table_owner NVARCHAR(384) = NULL /*= 'Person'|'dbo'*/
/*WITH ENCRYPTION, RECOMPILE, EXECUTE AS CALLER|SELF|OWNER| 'user_name'*/
AS /*OLEKSANDR PAVLENKO p_GetTableAsSqlText ver.2016.10.11.1*/
DECLARE @isTemporaryTable BIT = 0
/*[DATABASE NAME]*/
IF (PATINDEX('#%', @table_name) <> 0)
BEGIN
SELECT @database_name = DB_NAME(2) /*2 - 'tempdb'*/
, @isTemporaryTable = 1
END
ELSE
SET @database_name = COALESCE(@database_name, DB_NAME())
/*END [DATABASE NAME]*/
/*[SCHEMA]*/
SET @table_owner = COALESCE(@table_owner, SCHEMA_NAME())
DECLARE @database_nameQuoted NVARCHAR(384) = QUOTENAME(@database_name, '')
DECLARE @table_ownerQuoted NVARCHAR(384) = QUOTENAME(@table_owner, '')
DECLARE @table_nameQuoted NVARCHAR(384) = QUOTENAME(@table_name, '')
DECLARE @full_table_name NVARCHAR(769)
/*384 + 1 + 384*/
DECLARE @table_id INT
SET @full_table_name = CONCAT(@database_nameQuoted, '.', @table_ownerQuoted, '.', @table_nameQuoted)
SET @table_id = OBJECT_ID(@full_table_name)
CREATE TABLE #ColumnTbl
(
ColumnId INT
, ColName sysname COLLATE DATABASE_DEFAULT
, TypeId TINYINT
, TypeName sysname COLLATE DATABASE_DEFAULT
, TypeMaxLength INT
)
DECLARE @dynSql NVARCHAR(MAX) = CONCAT('
INSERT INTO #ColumnTbl
SELECT ISC.ORDINAL_POSITION AS [ColumnId]
, ISC.COLUMN_NAME AS [ColName]
, T.system_type_id AS [TypeId]
, ISC.DATA_TYPE AS [TypeName]
, ISC.CHARACTER_MAXIMUM_LENGTH AS [TypeMaxLength]
FROM ', @database_name, '.INFORMATION_SCHEMA.COLUMNS AS [ISC]
INNER JOIN ', @database_name, '.sys.objects AS [O] ON ISC.TABLE_NAME = O.name
INNER JOIN ', @database_name, '.sys.types AS [T] ON ISC.DATA_TYPE = T.name
WHERE ISC.TABLE_CATALOG = "', @database_name, '"
AND ISC.TABLE_SCHEMA = "', @table_owner, '"
AND O.object_id = ', @table_id)
IF (@isTemporaryTable = 0)
SET @dynSql = CONCAT(@dynSql, '
AND ISC.TABLE_NAME = "', @table_name, '"
')
ELSE
SET @dynSql = CONCAT(@dynSql, '
AND ISC.TABLE_NAME LIKE "', @table_name, '%"
')
SET @dynSql = REPLACE(@dynSql, '"', '''')
EXEC(@dynSql)
DECLARE @columnNamesSeparated NVARCHAR(MAX) = SUBSTRING((SELECT ', [' + C.ColName + ']' AS [text()]
FROM #ColumnTbl AS [C]
ORDER BY C.ColumnId
FOR
XML PATH('')
), 2, 4000)
--SELECT @columnNamesSeparated
DECLARE @columnNamesSeparatedWithTypes NVARCHAR(MAX) = SUBSTRING((SELECT '+", " + "CONVERT(' + (CASE C.TypeId
WHEN 231 /*NVARCHAR*/
THEN CONCAT(C.TypeName, '(',
(CASE WHEN C.TypeMaxLength = -1 THEN 'MAX'
ELSE CONVERT(NVARCHAR(MAX), C.TypeMaxLength)
END), ')')
WHEN 239 /*NCHAR*/
THEN CONCAT(C.TypeName, '(', C.TypeMaxLength, ')')
/*WHEN -1 /*XML*/ THEN '(MAX)'*/
ELSE C.TypeName
END) + ', "+ COALESCE('
+ (CASE C.TypeId
WHEN 56 /*INT*/ THEN 'CONVERT(NVARCHAR(MAX), [' + C.ColName + '])'
WHEN 40 /*DATE*/
THEN 'N"""" + CONVERT(NVARCHAR(MAX), [' + C.ColName + '], 101) + """"'
WHEN 60 /*MONEY*/ THEN 'CONVERT(NVARCHAR(MAX), [' + C.ColName + '])'
WHEN 61 /*DATETIME*/
THEN '"""" + CONVERT(NVARCHAR(MAX), [' + C.ColName + '], 21) + """"'
WHEN 104 /*BIT*/ THEN 'CONVERT(NVARCHAR(MAX), [' + C.ColName + '])'
WHEN 106 /*DECIMAL*/ THEN 'CONVERT(NVARCHAR(MAX), [' + C.ColName + '])'
WHEN 127 /*BIGINT*/ THEN 'CONVERT(NVARCHAR(MAX), [' + C.ColName + '])'
WHEN 189 /*TIMESTAMP*/
THEN 'N"""" + CONVERT(NVARCHAR(MAX), SUBSTRING([' + C.ColName
+ '], 1, 8000), 1) + """"'
WHEN 241 /*XML*/
THEN '"""" + CONVERT(NVARCHAR(MAX), [' + C.ColName + ']) + """"'
ELSE 'N"""" + CONVERT(NVARCHAR(MAX), REPLACE([' + C.ColName
+ '], """", """""")) + """"'
END) + ' , "NULL") + ") AS [' + C.ColName + ']"' + CHAR(10) COLLATE DATABASE_DEFAULT AS [text()]
FROM #ColumnTbl AS [C]
ORDER BY C.ColumnId
FOR
XML PATH('')
), 9, 100000)
/*SELECT @columnNamesSeparated, @full_table_name*/
DECLARE @dynSqlText NVARCHAR(MAX) = CONCAT(N'
SELECT (CASE WHEN ROW_NUMBER() OVER (ORDER BY (SELECT 1 )) = 1 THEN "
/*INSERT INTO ', @full_table_name, '
(', @columnNamesSeparated, '
)*/', '
SELECT T.* /*INTO #ResultTbl*/
FROM (
"
ELSE "UNION ALL "
END) + "SELECT "+ ', @columnNamesSeparatedWithTypes, ' FROM ', @full_table_name)
SET @dynSqlText = CONCAT(@dynSqlText, ' UNION ALL SELECT ") AS [T]
/*SELECT *
FROM #ResultTbl*/
"')
SET @dynSqlText = REPLACE(@dynSqlText, '"', '''')
--SELECT @dynSqlText AS [XML_F52E2B61-18A1-11d1-B105-00805F49916B]
EXEC(@dynSqlText)
IF OBJECT_ID('tempdb..#ColumnTbl') IS NOT NULL
DROP TABLE #ColumnTbl;
GO
|
[
"codereview.stackexchange",
"0000066492.txt"
] | Q:
DRY up ivar assignment in Rails new and create controller actions
When creating a new model in Rails, I need to pass a bunch of other models into the view so I can define the correct associations. All I’m really doing here is querying the persistence layer for models and assigning them to instance variables, but I’m having to do it twice. What’s the best way of extracting the assignment? Just creating a controller method and calling it? I’m not sure.
class ProjectsController < ApplicationController
def new
@project = Project.new
@companies = Company.as_select_options
@project_varieties = ProjectVariety.as_select_options
@prices = Price.as_select_options
@users = User.as_select_options
end
def create
@project = Project.new(project_params)
@companies = Company.as_select_options
@project_varieties = ProjectVariety.as_select_options
@prices = Price.as_select_options
@users = User.as_select_options
if @project.save
flash[:success] = "#{@project.name} has been created."
redirect_to @project
else
render :new
end
end
end
A:
Simplest thing would be to add a protected method to the controller and a matching before_filter/before_action callback:
Your actions become:
class ProjectsController < ApplicationController
before_filter :load_select_options, only: [:new, :create]
def new
@project = Project.new
end
def create
@project = Project.new(project_params)
if @project.save
flash[:success] = "#{@project.name} has been created."
redirect_to @project
else
render :new
end
end
protected
def load_select_options
@companies = Company.as_select_options
@project_varieties = ProjectVariety.as_select_options
@prices = Price.as_select_options
@users = User.as_select_options
end
end
If you have an edit action, add that to the before-filter's list too.
However, since you (presumably) only need to load all that stuff in create if you have to render the new template, you can skip the before filter, and instead do something more targeted:
class ProjectsController < ApplicationController
def new
@project = Project.new
load_select_options
end
def create
@project = Project.new(project_params)
if @project.save
flash[:success] = "#{@project.name} has been created."
redirect_to @project
else
load_select_options
render :new
end
end
protected
def load_select_options
@companies = Company.as_select_options
@project_varieties = ProjectVariety.as_select_options
@prices = Price.as_select_options
@users = User.as_select_options
end
end
A:
I personally think that assigning that many instance variables for a view is a bit wrong; one other option is to create a PORO class to fill that role.
class SelectOptions
def company_select_options
Company.as_select_options
end
def project_variety_options
ProjectVariety.as_select_options
end
# etc...
end
This is then assigned in the controller (copied from other answer).
class ProjectsController < ApplicationController
before_filter :set_select_options, only: [:new, :create]
def new
@project = Project.new
end
def create
@project = Project.new(project_params)
if @project.save
flash[:success] = "#{@project.name} has been created."
redirect_to @project
else
render :new
end
end
protected
def set_select_options
@select_options = SelectOptions.new
end
end
And then used in the view, however your view is coded.
<%= options_for_select @select_options.company_select_options %>
|
[
"mathematica.stackexchange",
"0000128266.txt"
] | Q:
Different Plot's output defining a function
I want a plot with this function
-((H^3 T)/(200 Pi)) + (H^2 T^2)/96 + H^4 l + H^2 m
with
rule = {l -> 27/(40000 Pi^2), m -> -85, T -> 106}
So if I give as input
f = -((H^3 T)/(200 Pi)) + (H^2 T^2)/96 + H^4 l +
H^2 m /. {l -> 27/(40000 Pi^2), m -> -85, T -> 106}
and plot
Plot[Log[Abs[f]] Sign[f], {H, 0, 4000}, PlotRange -> All]
the plot line is not continuous.
Instead, if I don't define f and explicitly write
Plot[Log[Abs[-((H^3 T)/(200 Pi)) + (H^2 T^2)/96 + H^4 l +
H^2 m /. {l -> 27/(40000 Pi^2), m -> -85, T -> 106}
]] Sign[-((H^3 T)/(200 Pi)) + (H^2 T^2)/96 + H^4 l +
H^2 m /. {l -> 27/(40000 Pi^2), m -> -85, T -> 106}
], {H, 0, 4000}, PlotRange -> All]
the plot line is continuous.
Why is there this difference?
Is there a way to obtain a continuous line while still labelling the function as f?
A:
Why the difference arises
The difference arises because Log[f] (as well as Sign[f]) is discontinuous where f == 0, which Plot computes symbolically and gets different results in the two Plot codes. There should be vertical asymptotes at those points. (So Exclusions -> None is not a fix, as it gives a continuous graph.)
Here are the exclusions computed by Plot in each of the OP's cases:
Visualization`ExpandExclusions[Log[f] Sign[f], {H}, Automatic]
(*
{{Im[(769 H^2)/24 + (27 H^4)/(40000 π^2) - (53 H^3)/(100 π)] == 0,
Re[(769 H^2)/24 + (27 H^4)/(40000 π^2) - (53 H^3)/(100 π)] <= 0},
{(769 H^2)/24 + (27 H^4)/(40000 π^2) - (53 H^3)/(100 π) == 0, True},
{(769 H^2)/24 + (27 H^4)/(40000 π^2) - (53 H^3)/(100 π) == 0, True}}
*)
Visualization`ExpandExclusions[
Log[Abs[-((H^3 T)/(200 Pi)) + (H^2 T^2)/96 + H^4 l + H^2 m /.
{l -> 27/(40000 Pi^2), m -> -85, T -> 106}]] *
Sign[-((H^3 T)/(200 Pi)) + (H^2 T^2)/96 + H^4 l + H^2 m /.
{l -> 27/(40000 Pi^2), m -> -85, T -> 106}],
{H}, Automatic]
(*
{}
*)
One can see these calls with
Trace[
(* plot command *),
_Visualization`ExpandExclusions,
TraceInternal -> True]
Update:
The reason no exclusions are produced is that the unevaluated expression being plotted contains the member ReplaceAll of the following blacklist, which prevents further analysis of potential exclusions:
Visualization`DiscontinuityDump`$BlackList
(*
{CompiledFunction, InterpolatingFunction, LinearSolveFunction,
NearestFunction, TransformationFunction, NIntegrate, NSum, NDSolve,
FindRoot, FindMinimum, FindMaximum, NMinimize, NMaximize, FixedPoint,
FixedPointList, Nest, NestList, Fold, FoldList, NestWhile,
NestWhileList, Apply, Map, Table, Do, For, While, Set, SetDelayed,
Decrement, PreDecrement, Increment, PreIncrement, Rule, RuleDelayed,
ReplaceAll, ReplaceRepeated, Replace, Nearest}
*)
Getting an accurate plot
The order of growth of f at the asymptotes is very small, and to get large enough values of f that look like asymptotes, it takes values of H that are closer to the singularities than MachinePrecision will allow.
For instance, to get a magnitude greater than 40, we have to be within roughly 10^-24 of the singularities:
Block[ (* H near singularities *)
{H = H + {-1*^-24, 1*^-24} /.
NSolve[f == 0 && 0 < H < 4000, H, WorkingPrecision -> 32]},
Log[Abs[f]] Sign[f]
]
(* {{-46.5568, 46.5568}, {41.780, -41.780}} *)
It is impractical to do this with Plot, since it automatically controls selection of the sample points. I will use ListLinePlot instead. If we include the points above, as well as one close to H == 0 where there is another asymptote, we can get this picture:
ListLinePlot[
Table[{H, Log[Abs[f]] Sign[f]},
{H, Union[Range[1.`32*^-20, 4000, 5],
Flatten[H + {-1*^-20, 1*^-20} /.
NSolve[f == 0 && 0 < H < 4000, H, WorkingPrecision -> 32]]]}],
PlotRange -> {-25, 25}]
It's virtually impossible to mark up the asymptotes because the graph is so close to them. (One can try dashing or transparency, but the result is not very good, imo.)
|
[
"stats.stackexchange",
"0000466646.txt"
] | Q:
Is multinomial logistic regression really the same as softmax regression
Multinomial logistic regression (MLR) is an extension of logistic regression for more than $2$ classes. The extension is made up by keeping linear boundaries between classes and using the class $K$ as pivot:
$$\log \frac{Pr(G=i)}{Pr(G=K)} = \beta_i x$$
Now since everything has to sum up to 1:
$$\sum_{i=1}^K Pr(G=i) = 1\Rightarrow \sum_{i=1}^{K-1} e^{\beta_i x}Pr(G=K) + Pr(G=K) = 1 \Rightarrow Pr(G=K) = \frac{1}{1+\sum_{i=1}^{K-1} e^{\beta_i x}}$$
Softmax on the contrary assumes for all classes:
$$Pr(G=i)= \frac{1}{C}e^{\beta_i x}$$
where $C$ is a constant. Forcing to sum up to one:
$$C= \sum_{i=1}^K e^{\beta_ix}$$
so:
$$Pr(G=i)= \frac{1}{\sum_{i=1}^K e^{\beta_ix}}e^{\beta_i x}$$
Now here are the things that aren't clear to me:
How are they said to be the same if they do not even have the same parameters? By using class $K$ as pivot, MLR does not have parameters $\beta_K$, while Softmax has.
If they are the same, can someone prove to me?
If they aren't the same, I assume the boundaries cannot be the same: are they at least similar?
A:
Softmax and logistic multinomial regression are indeed the same.
In your definition of the softmax link function, you can notice that the model is not well identified: if you add a constant vector to all the $\beta_i$, the probabilities will stay the same. To solve this issue, you need to specify a condition; a common one is $\beta_K = 0$ (which gives back the logistic link function). But you can of course specify something else, like requiring the sum of the $\beta_i$ to be $0$, for example. Then the parameters of the softmax regression will be different from the parameters of the logistic regression, but there will be a one-to-one transform to go from one to the other, meaning that inference with either model is equivalent.
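This one-to-one correspondence is easy to verify numerically. Below is a pure-Python sketch (the coefficients and feature vector are made up for the example) checking both the shift invariance of softmax and that the $\beta_K = 0$ identification reproduces the pivot (logistic) formulation:

```python
import math

def softmax_probs(betas, x):
    """P(G=i) = exp(beta_i . x) / sum_j exp(beta_j . x)."""
    scores = [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up coefficients for K = 3 classes over 2 features.
betas = [[0.5, -1.0], [1.2, 0.3], [-0.7, 0.9]]
x = [2.0, 1.0]

p = softmax_probs(betas, x)

# (1) Shift invariance: adding the same vector c to every beta_i
#     leaves the probabilities unchanged.
c = [3.0, -2.0]
shifted = [[b + ci for b, ci in zip(beta, c)] for beta in betas]
p_shifted = softmax_probs(shifted, x)
assert all(abs(a - b) < 1e-12 for a, b in zip(p, p_shifted))

# (2) Identification via beta_K = 0: subtracting beta_K from every row
#     gives the logistic (pivot) parameterization, with the same probabilities.
pivot = [[b - bk for b, bk in zip(beta, betas[-1])] for beta in betas]
p_pivot = softmax_probs(pivot, x)
assert all(abs(a - b) < 1e-12 for a, b in zip(p, p_pivot))

# The pivot-class probability matches 1 / (1 + sum_{i<K} exp(beta_i . x)).
denom = 1 + sum(math.exp(sum(b * xi for b, xi in zip(beta, x)))
                for beta in pivot[:-1])
assert abs(p[-1] - 1 / denom) < 1e-12
print([round(v, 6) for v in p])
```

The last assertion is exactly the $Pr(G=K)$ formula from the question, recovered from the softmax parameterization.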
|
[
"stackoverflow",
"0037056860.txt"
] | Q:
Text encoding for sound recording copyright symbol in a .tsv file
I'm having an issue where I can't get this symbol "℗" to render in a .tsv file. I'm using powershell to add data to my .tsv that has copyright info, so I need to have it copy over correctly. I use add-content -path C:\blah and I include the -encoding parameter at the end, but all of the encoding choices I've tried cannot render this sound recording copyright symbol. Anyone have any idea if this can work? UTF8 and UTF32 render the © symbol correctly, for what it's worth. What's also sort of funny is that Powershell ISE can render the symbol correctly.
Thanks for any help.
Edit: I'm thinking now it may just be a limitation of the .tsv file? I just tried opening the .tsv in excel, pasting the "℗" symbol into an empty cell, and when I save and re-open the file, the "℗" is replaced by a "?".
Edit 2: If I use import-csv -path -delimiter and import the .tsv, the symbol does render correctly in Powershell. I would like it to render correctly in Excel if that's possible. I also tried to load it using Google Sheets but it had the same problem.
A:
This turns out to be sort of a workaround I think, but the end result is what I wanted.
There seems to be some type of problem with excel and the encoding of .tsv, .csv, etc. So what I found worked for me was opening whatever file I wanted (in Excel), saving as a "Unicode .txt", then just renaming the file extension from .txt to .tsv. The file still opens correctly as a .tsv spreadsheet, but for whatever reason the encoding works now so that the "℗" saves correctly. And it looks like I only need to do this once and I can keep appending to the same .tsv. Strange solution, but I'll take it.
And just to clarify, if I open the file as a .txt in Excel, it also retains the correct formatting. I just changed the extension to .tsv so that a double click will open to Excel rather than notepad. I believe the file is technically still saved as a Unicode .txt file.
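The "Unicode .txt" trick likely works because that format starts with a byte-order mark, which lets Excel detect the encoding; files without a BOM are often read in a legacy codepage. The same idea applied to UTF-8 can be sketched in Python (this is an illustration of the encoding, not the original PowerShell workflow; the file name and data are made up):

```python
# Write a TSV with a UTF-8 byte-order mark so Excel can detect the encoding.
# The "utf-8-sig" codec prepends the BOM bytes EF BB BF automatically.
import os

rows = [("title", "rights"),
        ("Some Song", "\u2117 2016 Example Records")]  # U+2117 is the ℗ symbol

path = "copyright_info.tsv"  # made-up file name for the example
with open(path, "w", encoding="utf-8-sig", newline="") as f:
    for row in rows:
        f.write("\t".join(row) + "\n")

# Verify: the raw file starts with the BOM, and the ℗ symbol round-trips.
with open(path, "rb") as f:
    raw = f.read()
assert raw.startswith(b"\xef\xbb\xbf")

with open(path, encoding="utf-8-sig") as f:
    text = f.read()
assert "\u2117" in text
os.remove(path)
```

Reading the file back with the same codec strips the BOM again, so downstream tools that do not expect it are unaffected.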
|
[
"stackoverflow",
"0059897013.txt"
] | Q:
Why fetch() uses ReadableStream for body of the response?
I was learning fetch() and found out that the body of the response uses something called a ReadableStream. According to my research, a readable stream allows us to start using the data while it is still being downloaded from the server (I hope I am correct :)). In terms of fetch(), how can a readable stream be useful, given that we need to download all of the data anyway before using it? Overall, I just cannot understand the point of the readable stream in fetch(), so I need your kind help :).
A:
Here's one scenario: a primitive ASCII art "video player".
For simplicity, imagine a frame of "video" in this demo is 80 x 50 = 4000 characters. Your video "decoder" reads 4000 characters, displays the characters in a 80 x 50 grid, reads another 4000 characters, and so on until the data is finished.
One way to do this is to send a GET request using fetch, get the whole body as a really long string, then start displaying. So for a 100 frame "video" it would receive 400,000 characters before showing the first frame to the user.
But, why does the user have to wait for the last frame to be sent, before they can view the first frame? Instead, still using fetch, read 4000 characters at a time from the ReadableStream response content. You can read these characters before the remaining data has even reached the client.
Potentially, you can be processing data at the start of the stream in the client, before the server has even begun to process the data at the end of the stream.
Potentially, a stream might not even have a defined end (consider a streaming radio station for example).
There are lots of situations where it's better to work with streaming data than it is to slurp up the whole of a response. A simple example is summing a long list of numbers coming from some data source. You don't need all the numbers in memory at once to achieve this - you just need to read one at a time, add it to the total, then discard it.
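The running-sum idea is language-agnostic. Here is a tiny Python sketch with a generator standing in for the network stream (just to illustrate the constant-memory pattern, not the fetch API itself):

```python
def number_stream():
    """Simulates chunks arriving over the network one at a time."""
    for n in range(1, 1001):
        yield n  # each value becomes available as soon as it is 'received'

total = 0
for n in number_stream():
    total += n      # constant memory: only one number is held at a time
print(total)        # prints 500500
```

At no point is the full list of 1000 numbers in memory; the consumer processes each chunk and discards it, which is exactly what reading a ReadableStream chunk-by-chunk buys you.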
|
[
"stackoverflow",
"0059117365.txt"
] | Q:
Getting 307 statuscode when using POST with axios
I have a React application, where I use axios to send a post request to a .NET Core 3.0 endpoint.
The controller method I want to hit at the route http://localhost:500/api/auth/register looks as follows:
[HttpPost("register")]
public async Task<IActionResult> Register(UserForRegisterDto userForRegisterDto)
{
var userToCreate = _mapper.Map<User>(userForRegisterDto);
var result = await _userManager.CreateAsync(userToCreate, userForRegisterDto.Password);
var userToReturn = _mapper.Map<UserForDetailedDto>(userToCreate);
if (result.Succeeded)
{
return CreatedAtRoute("GetUser",
new { controller = "Users", id = userToCreate.Id }, userToReturn);
}
return BadRequest(result.Errors);
}
In my react frontend I send the request using axios
export const startLogin = (password, username) => {
return dispatch => {
axios
.post(baseUrl + '/api/auth/register', {
username,
password
})
.then(data => dispatch(login(data.data.token)));
};
};
But when I send the request the server responds with a 307 Temporary redirect.
The registration works fine when I issue the request using postman for some reason. Anyone know what might be the issue?
A:
You need to remove app.UseHttpsRedirection(); in startup.cs
|
[
"stackoverflow",
"0057450362.txt"
] | Q:
How would a WHEN condition work in a JOIN query?
Where's the problem in this query?
I can't figure out a way to make it work.
I need to select just the 'ricette' row with id = 4 and then join with the other table 'val'. I don't know where to put the condition WHERE ric.id = 4. Without 'WHERE ric.id = 4' the query works, but obviously it returns all the rows in the table 'ricette' and not just the one with 'ricette.id = 4'.
This is the query:
SELECT * FROM ricette AS ric WHERE ric.id = 4
LEFT JOIN (SELECT id_ricetta, AVG(valutazione) AS media
FROM `valutazioni` GROUP BY id_ricetta) AS val ON ric.id = val.id_ricetta
Thank you in advance.
A:
JOIN is an operator not a clause in SQL. It is only recognized in the FROM clause.
It operates on tables/derived tables/views. So, your query should be written with the LEFT JOIN in the FROM clause:
SELECT *
FROM ricette AS ric LEFT JOIN
(SELECT id_ricetta, AVG(valutazione) AS media
FROM `valutazioni`
GROUP BY id_ricetta
) AS val
ON ric.id = val.id_ricetta
WHERE ric.id = 4 ;
You will note that when I write code, if an expression with an operator spans more than one line, I put the operator at the end of the line. To me, this makes it clear that the expression spans multiple lines.
On the other hand, I left align the clauses in a SQL query. These are generally: SELECT, FROM, WHERE, GROUP BY, HAVING, and ORDER BY.
As a note, the following is a more performant version of your query:
SELECT r.*,
(SELECT AVG(v.valutazione)
FROM valutazioni v
WHERE v.id_ricetta = r.id
) as media
FROM ricette r
WHERE r.id = 4 ;
The reason for this is simple. You are filtering the results in the query. However, your version aggregates all of valutazioni -- which is (presumably) much more data than the rows for id = 4 alone. An index on valutazioni(id_ricetta, valutazione) would speed it up even more.
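If you want to experiment with the clause ordering, here is a self-contained sketch using SQLite from Python; the table contents are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ricette (id INTEGER PRIMARY KEY, nome TEXT);
    CREATE TABLE valutazioni (id_ricetta INTEGER, valutazione REAL);
    INSERT INTO ricette VALUES (4, 'carbonara'), (5, 'ragu');
    INSERT INTO valutazioni VALUES (4, 3.0), (4, 5.0), (5, 2.0);
""")

# WHERE comes after the whole FROM clause, which contains the LEFT JOIN.
rows = conn.execute("""
    SELECT ric.id, ric.nome, val.media
    FROM ricette AS ric
    LEFT JOIN (SELECT id_ricetta, AVG(valutazione) AS media
               FROM valutazioni
               GROUP BY id_ricetta) AS val
      ON ric.id = val.id_ricetta
    WHERE ric.id = 4
""").fetchall()

print(rows)  # one row for recipe 4, with its average rating
conn.close()
```

Running this returns a single row for id 4 with the average of its two ratings, confirming that the filter belongs after the joined FROM clause.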
|
[
"photo.stackexchange",
"0000092494.txt"
] | Q:
Focus problems with all lens
Suddenly having a problem with out of focus or very very soft images.
Doesn't matter if I use a Canon 50mm f/1.8 or a Sigma 70-200 or 24-70.
I'm guessing user error or camera.
Example image from a Canon 6D 50mm Canon f/1.8 lens, ISO320, f/3.2 1/200.
Using back button focus, seems to focus fine and then I look at the image, zoom in and it's terrible.
What in the world could be the problem?
I would think a shutter speed of 1/200 for a 50mm shouldn't produce camera shake?
A:
When diagnosing focus/sharpness issues the first step should always be to eliminate as many of the possible contributing factors as one can. This would include shooting a well lit target square with the camera from a solid mount, such as a tripod. The test target should be sufficiently lit to allow shutter times in the 1/500 second neighborhood at ISO 100 with the lens wide open.
Your sample image makes it difficult to diagnose much. There are certainly parts of the image that are fairly sharp: The subject's feet and the boards underneath them, the subject's left hand, and other areas of the frame are all acceptably in focus. But with the camera not square to the wall nor the door frame it is really hard to pin down exactly what is going on.
It looks like there might be some tilt involved, perhaps combined with some front focus, but that could just as easily be due to the skewed position of the camera and the lens' optical axis with respect to the presumably flat parts of the scene as it might be due to a sensor/mounting flange/lens alignment issue.
Has the camera been dropped or taken a hard bump recently?
Did you use a 'focus and recompose' technique?
What AF point was selected and what part of the frame was it over when you focused?
What AF mode was used? AI Servo, One Shot, or AI Focus which is a hybrid of the other two and fairly unpredictable?
For a much more comprehensive list of issues that may result in soft or out of focus images, please see this answer to How do I diagnose the source of focus problem in a camera?
|
[
"stackoverflow",
"0018366023.txt"
] | Q:
Choose implementation at compile time in F#
Most programming languages have some way of choosing an implementation at compile-time based on types. Function overloading is a common way of doing this. Using templates (in C++ or D possibly with constraits) is another option.
But in F#, I cannot find out how to do this without using class methods, and thus losing some nice properties like currying.
let f (a:int) =
Gives Duplicate definition of 'f'
F# has statically resolved type parameters, but I don't know how I can use this..
let f (a:^T) =
match T with
Gives The value or constructor of T is not defined at match T
let f (a:^T) =
match a with
| :> int as i ->
Gives Unexpected symbol ':>' in expression
let f (a:^T) =
match ^a with
| :> int as i ->
Gives Unexpected infix operator in expression
A:
If you want to write a function that behaves differently for different types and is an ordinary F# function, then static member constraints let you do that. However, if you want to write idiomatic F# code, then there are other options:
Here is a good example showing how you can use static member constraints to do this
F# collections use different module for each type, so there is Array.map, List.map, Seq.map etc. This is idiomatic style for functional F# libraries.
FSharpChart is an example of a library that uses overloaded methods. Note that you can use static methods, so you can write Chart.Line [ ... ] and it will pick the right overload.
If you want to write generic numeric code, then I recently wrote a tutorial that covers this topic.
So, I would be a bit careful before using static constraints - it is not entirely idiomatic (e.g. not commonly used in standard libraries) and so it may cause some confusion. But it is quite powerful and certainly useful.
The key is that simply following patterns that work well in other languages might not give you the best results in F#. If you can provide a concrete example of what you're trying to do, then you might get a better results.
|
[
"stackoverflow",
"0011078059.txt"
] | Q:
sending email from wcf web service
I have a WCF web service. I want to send email from this service. Are there libraries for sending email from WCF?
I want the library to have a simple scheduler, for example to send every hour or something like that. I can't find anything.
A:
.NET's SmtpClient works fine from a WCF web service.
You can use Task Scheduler or Quartz.Net to schedule sending of emails.
|
[
"stackoverflow",
"0044534439.txt"
] | Q:
Trying to create an application using ASP.NET MVC with Entity Framework, but it returns zero records
I am trying to learn ASP.NET MVC by doing hands-on work, so I started with a very small web application using ASP.NET MVC along with Entity Framework to access SQL Server.
I created one table in sql server localdb
USE Ourlifestory ;
GO
create table staticlocations(
LocationID int primary key,
LocationName varchar(30),
Tripdate datetime,
Locationimage image)
Below is the connection string
<add name="OurLifeStoryDBContext" providerName="System.Data.SqlClient" connectionString="Data Source=(LocalDB)\v11.0; Initial Catalog=Ourlifesotry; Integrated Security=True;" />
DBContext class :
public class OurLifeStoryDBContext : DbContext
{
public OurLifeStoryDBContext()
: base("name=OurLifeStoryDBContext")
{
}
public DbSet<StaticLocations> Location { get; set; }
}
Model class for one table :
[Table("staticlocations")]
public class StaticLocations
{
[Key]
[Column(Order = 1)]
public int LocationID { get; set; }
[Column(Order = 2)]
public string LocationName { get; set; }
[Column(Order = 3)]
public DateTime Tripdate { get; set; }
//[Column(TypeName="image")]
//public byte[] LocationImage { get; set; }
}
Controller action method accessing this dbcontext :
public ActionResult GetLocationDetails(int locationID)
{
var dbContext = new OurLifeStoryDBContext();
var location = dbContext.Location.ToList();
return View("_GetLocationDetails");
}
But in the above action method, on the line below,
var location = dbContext.Location.ToList();
I am getting zero records, though I am sure I have one record, which I manually inserted with an INSERT statement in the database.
Any thoughts on what I am missing here?
A:
Here is why nothing was returned, there was a typo in your connection string:
Initial Catalog=Ourlifesotry;
in your connection string should be
Initial Catalog=Ourlifestory;
Why did it not give any "error" or indication?
That's because Entity Framework cannot detect if you spelled your database name wrong. For all it knows, your database could be called "misssssspelled".
Also, when there is no existing database with the provided name, it will simply return nothing instead of an error or warning.
|
[
"stackoverflow",
"0013970260.txt"
] | Q:
T4 template VS2010 get host assembly
I want to get a reference to the assembly of the project that the T4 template is in. I know I can get the path to the project with, for example, Host.ResolveAssemblyReference("$(ProjectDir)"), and I could maybe add bin\\debug\\{projectName}.dll because my assembly names match the project names, but that is not always the case, and I'm creating a reusable template, so I need the path to the dll or, most preferably, an Assembly instance.
I have also found how to get a reference to the Project, as explained here in the method GetProjectContainingT4File, but then what?
Is there a way to get it?
BTW, I need that reference to access specific types and generate some code from them.
A:
Following simple code worked for me (VS 2013):
var path = this.Host.ResolveAssemblyReference("$(TargetPath)");
var asm = Assembly.LoadFrom(path);
Also, you can find the $(...) properties in the project's post build steps editor.
A:
Ok, I managed to get the needed reference to the assembly. @FuleSnabel gave me a hint, although I didn't use his suggestion.
Here's a part of my T4 template:
<#@ template debug="true" hostSpecific="true" #>
<#@ output extension=".output" #>
<#@ Assembly Name="System.Core.dll" #>
<#@ Assembly Name="System.Windows.Forms.dll" #>
<#@ Assembly Name="System.Xml.Linq.dll" #>
<#@ Assembly Name="Microsoft.VisualStudio.Shell.Interop.8.0" #>
<#@ Assembly Name="EnvDTE" #>
<#@ Assembly Name="EnvDTE80" #>
<#@ Assembly Name="VSLangProj" #>
<#@ import namespace="System" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Diagnostics" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Xml.Linq" #>
<#@ import namespace="System.Collections" #>
<#@ import namespace="System.Reflection" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ import namespace="Microsoft.VisualStudio.TextTemplating" #>
<#@ import namespace="Microsoft.VisualStudio.Shell.Interop" #>
<#@ import namespace="EnvDTE" #>
<#@ import namespace="EnvDTE80" #>
<#@ include file="T4Toolbox.tt" #>
<#
Project prj = GetProject();
string fileName = "$(ProjectDir)bin\\debug\\" + prj.Properties.Item("OutputFileName").Value;
string path = Host.ResolveAssemblyReference(fileName);
Assembly asm = Assembly.LoadFrom(path);
// ....
#>
// generated code goes here
<#+
Project GetProject()
{
var serviceProvider = Host as IServiceProvider;
if (serviceProvider == null)
{
throw new Exception("Visual Studio host not found!");
}
DTE dte = serviceProvider.GetService(typeof(SDTE)) as DTE;
if (dte == null)
{
throw new Exception("Visual Studio host not found!");
}
ProjectItem projectItem = dte.Solution.FindProjectItem(Host.TemplateFile);
if (projectItem.Document == null) {
projectItem.Open(Constants.vsViewKindCode);
}
return projectItem.ContainingProject;
}
#>
So, to find the right path to the assembly I had to get a reference to the project in the GetProject() method and then use the project's property OutputFileName with prj.Properties.Item("OutputFileName").Value. Since I couldn't find anywhere what properties a project has, I used enumeration and a loop to inspect the Properties collection and then found what I needed. Here's the loop code:
<#
// ....
foreach(Property prop in prj.Properties)
{
#>
<#= prop.Name #>
<#
}
// ....
#>
I hope this will help someone.
|
[
"electronics.stackexchange",
"0000150664.txt"
] | Q:
Robotics Experts: How to do trajectory planning for a robotic arm
I have a homemade 3D printed prosthetic robotic arm similar to the picture below
Right now this is a completely open-loop control system. The elbow, wrist and base can only rotate around their primary axes. I enter the angle that the base, elbow and wrist should rotate to and it performs that action; nothing else happens. Each finger is actuated by a motor, which enables the grip and release of each finger, and this is controlled by an entirely different control system at a much lower output voltage than the wrist, elbow and base motors.
But I do not feel that is very sophisticated. I wish to modify this arm to make it more "industry" grade.
What should I do to get started in developing a control algorithm for the trajectory of this arm?
Some thoughts: maybe add some "sensors" that provide angle information of the arm in space? Can someone who is familiar with robotics provide me with some information as to how this is done?
A:
One "quick" solution (not necessarily that quick, but quicker and more robust than starting from scratch) is to use something like Blender to generate position information for a configuration you want your robot to take. Basically your input is
two or more states the arm should be in, and Blender can compute the necessary angles and interpolations between them. Since Blender is scriptable using Python, you might even be able to drive the robotic arm "real-time" from inside Blender: obligatory Youtube video.
If you're interested in how to write code which could do the solving part for you, the key algorithm to understand is inverse kinematics. This is all about finding the joint parameters given a desired position/orientation. It is not something which is easy to solve (it requires solving non-linear systems of equations), but it is certainly doable.
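To give a flavor of what an inverse kinematics solver computes, here is a minimal Python sketch for the simplest case, a 2-link planar arm, where a closed-form solution exists (the link lengths and target point are arbitrary example values); real multi-joint arms typically need a numerical solver:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles reaching point (x, y) for a planar
    arm with link lengths l1 and l2 (one of the two elbow branches)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics: joint angles -> end-effector position."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

l1, l2 = 1.0, 0.8            # arbitrary link lengths
target = (1.2, 0.9)          # arbitrary reachable target
t1, t2 = two_link_ik(*target, l1, l2)
fx, fy = forward(t1, t2, l1, l2)

# Sanity check: plugging the solved angles back in should land on the target.
assert math.isclose(fx, target[0], abs_tol=1e-9)
assert math.isclose(fy, target[1], abs_tol=1e-9)
print(f"theta1={t1:.3f} rad, theta2={t2:.3f} rad")
```

The forward-kinematics check at the end is the standard sanity test for any IK solution, analytic or numerical.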
Closed-loop feedback is not necessarily required; however, you do need to be able to accurately position limbs. One open-loop solution is to use stepper motors and a known "home" position. The software can track what it thinks the current orientation of the arm is by simply tracking the operations it performed moving away from the home position.
|
[
"stackoverflow",
"0052444812.txt"
] | Q:
Getting user info from request to Cloud Function in Firebase
I'd like to be able to get user information for a Cloud Function that gets called in Firebase. I have something like this, which is being called after the user signs into the app:
exports.myFunction = functions.https.onRequest((req, res) => { ... }
And I need to get the uid of the signed-in user that made the request. Any advice or comments please?
A:
Information about the user that makes the request is not automatically passed along with HTTPS functions calls.
You can either pass the ID token along yourself and decode/verify it in your function, or you can use a callable HTTPS function, which passes the user information along automatically.
|
[
"stackoverflow",
"0011287340.txt"
] | Q:
comparing array value with previous array value
How do I compare the current array value with the previous array value? For example, if I have the following array and want to compare [BM1367 PD C 70][ST00576]['transferfrom'] with the previous array value, which is [BM1367 PD B 85][ST00576]['transferfrom']?
[BM1367 PD B 85] => Array
(
[ST00576] => stdClass Object
(
[transferfrom] => 102
[transferto] => 66
[BR_ID] => 102
[refno] =>
)
[OT01606] => stdClass Object
(
[transferfrom] => 102
[transferto] => 66
[BR_ID] => 66
[refno] => 102 - ST00576
)
)
[BM1367 PD C 70] => Array
(
[ST00576] => stdClass Object
(
[transferfrom] => 102
[transferto] => 66
[BR_ID] => 102
[refno] =>
)
[OT01606] => stdClass Object
(
[transferfrom] => 102
[transferto] => 66
[BR_ID] => 66
[refno] => 102 - ST00576
)
)
[BM1367 PD C 85] => Array
(
[ST00576] => stdClass Object
(
[transferfrom] => 102
[transferto] => 66
[BR_ID] => 102
[refno] =>
)
[OT01606] => stdClass Object
(
[transferfrom] => 102
[transferto] => 66
[BR_ID] => 66
[refno] => 102 - ST00576
)
)
[BM1367 PD D 85] => Array
(
[ST00576] => stdClass Object
(
[transferfrom] => 102
[transferto] => 66
[BR_ID] => 102
[refno] =>
)
[OT01606] => stdClass Object
(
[transferfrom] => 102
[transferto] => 66
[BR_ID] => 66
[refno] => 102 - ST00576
)
)
)
A:
You asked:
how do I compare the current array value with previous array value
I think you may want to look at the following PHP functions
current() - http://www.php.net/manual/en/function.current.php
prev() - http://www.php.net/manual/en/function.prev.php
next() - http://php.net/manual/en/function.next.php
For example:
<?php
$transport = array('foot', 'bike', 'car', 'plane');
$mode = current($transport); // $mode = 'foot';
$mode = next($transport); // $mode = 'bike';
$mode = next($transport); // $mode = 'car';
$mode = prev($transport); // $mode = 'bike';
$mode = end($transport); // $mode = 'plane';
?>
I was able to use those in a computation formula, and it works great! It is also useful for comparing the current value with the previous or next one, as in your array.
|
[
"stackoverflow",
"0017128716.txt"
] | Q:
Using Javascript to load .tpl displays it wrong
I am trying to load some code with Javascript on a button click. The problem is that some of the code is just displayed instead of executed.
This is the file reload.tpl I am trying to load:
<div class="online-players">
{% for player in players_online %}
<div class="online-player-heads">
<a href="?page=player&name={{ player.getName }}">
{{ player.getPlayerHead(64, 'img-polaroid', true)|raw }}
</a>
</div>
{% else %}
<div class='force-center'><em>{{ 'no_players_online'|trans }}</em></div>
{% endfor %}
</div>
I use this to load the code:
<script>
$(document).ready(function() {
$('#button2').click(function() {
$('.online-players').fadeOut('slow', function() {
$(this).load('templates/default/views/reload.tpl', function() {
$(this).fadeIn("slow");
});
});
});
});
</script>
Expected:
http://i.stack.imgur.com/CRMGn.png
What actually is happening:
http://i.stack.imgur.com/8UvG7.png
Anyone know why it's doing this? When I am not trying to load through a script it works just fine, but as soon as I try the .load, it just doesn't display right.
Help? :)
A:
In case you did not discover by yourself, this is what you need to do:
Change the target page of the jQuery load to a PHP file (so the template is properly handled server-side):
<script>
$(document).ready(function() {
$('#button2').click(function() {
$('.online-players').fadeOut('slow', function() {
$(this).load('reload.php', function() {
$(this).fadeIn("slow");
});
});
});
});
</script>
Inside reload.php file:
require_once("./libs/Smarty.class.php");
$smarty = new Smarty;
$smarty->display("templates/default/views/reload.tpl");
I hope this will help you.
Regards,
|
[
"stackoverflow",
"0009502721.txt"
] | Q:
Accessing Control/Form-Objects through Classes leads to reinitialization
I have a main form with a register and a few subforms. I'm using class modules and I save the names of the forms in them, to make access to them easy. The corresponding variable for accessing the class is saved in a module and is set (New) in a form's On Load event (clsMod). Before the first access my main form calls a function which 'initializes' the values in the class module (initial_form), to make them accessible. That works like a charm, so far.
But when I now try to access the value, e.g. with clsMod.detailsControl or clsMod.detailsControl!fieldXy, my class module gets initialized again and thus loses all bound objects. I suppose I am not allowed to use Controls / Forms like that? There is no error, except of course 'Object variable or With block variable not set', which occurs afterwards.
Private m_ctldetailsControl As control
Public Sub initial_form()
Set detailsControl = Forms!mainForm_ufoMainForm
End Sub
Public Property Get detailsControl() As control
Set detailsControl = m_ctldetailsControl
End Property
Public Property Set detailsControl(ctlDetailsControl As control)
Set m_ctldetailsControl = ctlDetailsControl
End Property
I narrowed it down to the fact that the class module just gets initialized again, when I access the control-object from the 'outside' (I put a timestamp in the Class_Initialize() and can see when there is a new initialization), I just don't know why. Same happens when I use Form-Objects instead of Control-Objects.
I can rule out that my code resets the class module, because it only gets set once during the load process (set clsMod = new clsModification). Everything else inside that class works fine; I can access the property from inside the class without it reinitializing itself.
Any ideas or further reading regarding this topic would be greatly appreciated, for any other details just ask!
A few additions:
The class variable is located as "public clsMod As clsModuleXy" in a module
it gets set in the onLoad Event of my form (set clsMod = new clsModuleXy)
the set property works fine (as described above)
the get property works fine inside the class module (as described above)
when I use the get property outside of the class module, a new instantiation happens (if I set a local control/form to that property or try to access a field)
A:
I'm guessing that the culprit is that you have declared an instance of this class module As New. I obviously don't know what the rest of your code looks like, but I imagine the whole process is working something like this:
An instance of this object is declared As New (ie, Dim clsMod As New initial_form).
The Class_Initialize() procedure runs when the new instance (clsMod) is created.
Something causes this instance (clsMod) of the object to go out of scope.
The VBA garbage collector cleans up this object instance (clsMod) which is no longer in use.
The Class_Terminate() procedure runs when the instance (clsMod) is cleaned up.
You try to access clsMod. That variable is Nothing because the GC cleaned it up. However, you declared it As New so a brand new instance of initial_form is created and assigned to the object variable clsMod.
The Class_Initialize() procedure runs again for this brand new instance.
Without seeing the rest of your code, I can't say for sure this is the problem. But based on the symptoms you posted, this would explain the behavior.
|
[
"stackoverflow",
"0050756174.txt"
] | Q:
how to add two numbers in typescript using interfaces
Here I'm trying to add two numbers using the TypeScript interfaces and arrow functions concepts. When I run the compiled JS code, it calls the add() method and displays NaN.
below is the ts code:-
var add = (Num) => console.log(Num.num1 + Num.num2) ;
interface Num {
num1: number;
num2: number;
}
this.add(1,2);
below is the js code:-
var add = function (Num) { return console.log(Num.num1 + Num.num2); };
this.add(1, 2);
Thanks in advance
A:
Root Cause: No Type Annotation
While other answers solve the symptom, I don't believe they have addressed the underlying problem in your code, which is here:
var add = function (Num) { return console.log(Num.num1 + Num.num2); };
The parameter here is called Num, and has no type. I believe you intended this to be:
// name: type
const add = function (num: Num) { return console.log(num.num1 + num.num2); };
Calling Add
Your code isn't in a particular context, so let's drop this for a second and look at the call to the add function:
// ERROR: Expected 1 arguments, but got 2.
// const add: (num: Num) => void
add(1, 2);
TypeScript is now helping you to see the error in your call. Solving the call to add without solving your root cause fixed the symptom, but not the underlying issue.
You can now update your call to add with:
add({ num1: 1, num2: 2 });
And it will work - but so will all future calls; because your add function now had type information.
Demo
Drop the following into a TypeScript file, or the TypeScript Playground, to see this in action:
interface Num {
num1: number;
num2: number;
}
const add = function (num: Num) { return console.log(num.num1 + num.num2); };
add(1, 2);
add({ num1: 1, num2: 2 });
add(1, 2)
If you want to have an add method that allows the signature add(1, 2) you need to change the signature... the fully annotated function itself can be described as:
const add = function (a: number, b: number): number {
return a + b
};
If you want to see what the interface looks like for this, it is below:
interface Adder {
(num1: number, num2: number): number;
}
const add: Adder = function (a, b) {
return a + b;
};
Super Add
If you want your add function to handle more numbers, you can use a rest parameter... like this:
const add = function (...numbers: number[]) {
if (!numbers || !numbers.length) {
return 0;
}
return numbers
.reduce(function (acc, val) {
return acc + val;
});
};
console.log(add());
console.log(add(1, 2));
console.log(add(1, 2, 3, 4));
This will handle any number of arguments.
|
[
"stackoverflow",
"0008341473.txt"
] | Q:
What is the syntax for adding multiple strings to one variable in Java?
Okay, the thing I want to do is make a variable like Line = "hey","you","Mom".
Then later I want to be able to call Line 0 and get the string "hey".
I have something like this:
String[] history = new String "hey","you","Mom";
public String getLine(int index)
{
if (index < 0 || index >= this.history.length)
return null;
}
But this is not working..
How do i make this list? I'm new with the syntax in java.
A:
It's
String[] history = new String[] { "hey", "you", "Mom" };
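If it helps to see the whole thing together, here is a minimal self-contained sketch (the class name History is just for illustration; the null-on-out-of-range behavior follows your getLine):

```java
public class History {
    // Array initializer syntax: new String[] { ... } (or just { ... } at declaration).
    static String[] history = new String[] { "hey", "you", "Mom" };

    // Returns the string at the given index, or null when the index is out of range.
    static String getLine(int index) {
        if (index < 0 || index >= history.length)
            return null;
        return history[index];
    }

    public static void main(String[] args) {
        System.out.println(getLine(0)); // hey
    }
}
```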
|
[
"math.stackexchange",
"0002005205.txt"
] | Q:
Prove that there are infinitely many numbers that cannot be expressed as the sum of three cubes
Prove that there are infinitely many numbers that cannot be expressed as the sum of three cubes.
I thought this involved looking at cubes mod 7 but that doesn't work as they can be 0,+-1 so you can make any number mod7..... ok same thing different story for mod9.
Can this be solved using Fermat's little theorem?
X^6is congruent to 1 mod 7
express the three cubes as (X^2)^3.
No actually this doesn't work as this is just the same as the method for working mod9 on the cubes, with out the negative for squared, and also I think that Fermat's little theorem assumes that x is not a multiple of 7.
A:
Any number $m\equiv\pm4\pmod9$ cannot be expressed as a sum of $3$ cubes:
$n\equiv0\pmod9 \implies n^3\equiv0^3\equiv 0\pmod9$
$n\equiv1\pmod9 \implies n^3\equiv1^3\equiv+1\pmod9$
$n\equiv2\pmod9 \implies n^3\equiv2^3\equiv-1\pmod9$
$n\equiv3\pmod9 \implies n^3\equiv3^3\equiv 0\pmod9$
$n\equiv4\pmod9 \implies n^3\equiv4^3\equiv+1\pmod9$
$n\equiv5\pmod9 \implies n^3\equiv5^3\equiv-1\pmod9$
$n\equiv6\pmod9 \implies n^3\equiv6^3\equiv 0\pmod9$
$n\equiv7\pmod9 \implies n^3\equiv7^3\equiv+1\pmod9$
$n\equiv8\pmod9 \implies n^3\equiv8^3\equiv-1\pmod9$
No $3$ values chosen from $\{0,+1,-1\}$ will ever sum up to $\pm4\pmod9$.
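Not part of the proof, but the residue table above is easy to machine-check; a quick Python sketch:

```python
# Cubes modulo 9 only take the values 0, 1, and 8 (i.e. -1 mod 9).
cube_residues = {pow(n, 3, 9) for n in range(9)}
print(sorted(cube_residues))  # [0, 1, 8]

# No sum of three values from {0, 1, -1} is congruent to +-4 mod 9,
# so every m = +-4 (mod 9) is unreachable -- and there are infinitely many such m.
sums = {(a + b + c) % 9 for a in (0, 1, -1) for b in (0, 1, -1) for c in (0, 1, -1)}
print(4 in sums, 5 in sums)  # False False
```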
|
[
"stackoverflow",
"0009439702.txt"
] | Q:
How to open magit-status in full window
I am using Magit to work with git in Emacs. I have bound magit-status to a key, but every time I hit the key it opens in a split in the lower half of the frame and I have to hit C-x 1 to get it into a full window. How can I make it open in a full window by default?
A:
(setq magit-status-buffer-switch-function 'switch-to-buffer)
or via customize:
M-x customize-variable RET magit-status-buffer-switch-function RET
A:
For newer versions of magit you can use this sanctioned snippet:
(setq magit-display-buffer-function #'magit-display-buffer-fullframe-status-v1)
I combine it this to get a zen-like full window git status after switching projects:
(setq projectile-switch-project-action 'magit-status)
A:
Here is another way to achieve this:
(add-to-list 'same-window-regexps "\*magit: .*\*")
|
[
"stackoverflow",
"0010637909.txt"
] | Q:
Simulate button click in a Metro Style App
I am in the process of building a metro style app and I need to know if there is a way to fire button clicks programmatically.
I have this PasswordBox and button in my xaml:
<PasswordBox IsPasswordRevealButtonEnabled="True" KeyDown="On_text_KeyDown" x:Name="SomePW" FontSize="50" Margin="0,0,0,190" Height="67" Width="363"/>
<Button Background="White" x:Name="Button_Go" Foreground="Black" Margin="20,0,0,190" Content="Go" FontSize="20" Click="Go_click" Height="67" Width="60"/>
And in my C# code this is the function that handles the key press in the PasswordBox:
private void On_text_KeyDown(object sender, RoutedEventArgs e)
{
KeyEventArgs K = (KeyEventArgs)e;
if (K.Key == Windows.System.VirtualKey.Enter)
{
//<TO-DO> Simulate Button Click Here
}
}
The problem is I can't seem to find a way to simulate the button click... Can someone help please?
A:
Is there a problem with simply calling this or are you looking for a generic way to invoke a Click event on any button, perhaps to automate it?
Go_click(this, new RoutedEventArgs());
|
[
"stackoverflow",
"0061039700.txt"
] | Q:
Using flatten in pytorch v1.0 Sequential module
Due to my CUDA version being 8, I am using torch 1.0.0
I need to use the Flatten layer for Sequential model. Here's my code :
import torch
import torch.nn as nn
import torch.nn.functional as F
print(torch.__version__)
# 1.0.0
from collections import OrderedDict
layers = OrderedDict()
layers['conv1'] = nn.Conv2d(1, 5, 3)
layers['relu1'] = nn.ReLU()
layers['conv2'] = nn.Conv2d(5, 1, 3)
layers['relu2'] = nn.ReLU()
layers['flatten'] = nn.Flatten()
layers['linear1'] = nn.Linear(3600, 1)
model = nn.Sequential(
layers
).cuda()
It gives me the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-38-080f7c5f5037> in <module>
6 layers['conv2'] = nn.Conv2d(5, 1, 3)
7 layers['relu2'] = nn.ReLU()
----> 8 layers['flatten'] = nn.Flatten()
9 layers['linear1'] = nn.Linear(3600, 1)
10 model = nn.Sequential(
AttributeError: module 'torch.nn' has no attribute 'Flatten'
How can I flatten my conv layer output in pytorch 1.0.0?
A:
Just make a new Flatten layer.
from collections import OrderedDict
class Flatten(nn.Module):
def forward(self, input):
return input.view(input.size(0), -1)
layers = OrderedDict()
layers['conv1'] = nn.Conv2d(1, 5, 3)
layers['relu1'] = nn.ReLU()
layers['conv2'] = nn.Conv2d(5, 1, 3)
layers['relu2'] = nn.ReLU()
layers['flatten'] = Flatten()
layers['linear1'] = nn.Linear(3600, 1)
model = nn.Sequential(
layers
).cuda()
|
[
"stackoverflow",
"0019997020.txt"
] | Q:
Missing “Deploy”-Button Visual Studio 2012
Yesterday I have installed Visual Studio 2012 Premium on my SharePoint 2013 Development Machine. I also installed the Office Developer Tools to get the SP2013 Project templates.
Opening my SP 2010 Solution File and converting to 2013 was successful.
My problem: There is no "Deploy"-Button in the Project-Context-Menu anymore. Everything is there: Build, Rebuild, Retract etc.
But no Deploy.
Asked this in the SharePoint Forum too
A:
I had the same problem.
I started by creating some solution folders and placing my SharePoint project inside them. After moving the project from the root to frontend\, the Deploy item disappeared from the project's context menu.
I solved this by moving the SharePoint project back to the root folder of the solution, after which the Deploy item re-appeared in the project context menu.
|
[
"drupal.stackexchange",
"0000215734.txt"
] | Q:
Catch first node in a view template
I created a view for the front page and also a node--view--frontpage.html.twig to design this view. Is there any way to catch the first node with an if statement so I can style it differently?
In this case I want to change the order of the first iteration in the node--view--frontpage.html.twig:
{% set createdDate = node.getCreatedTime | date('j.m.Y') %}
<article{{ attributes }}>
{{ content.field_picture }}
<div class="meta">
{{ createdDate }}
{% if content.typ | render | trim %}
, {{ content.typ }}
{% endif %}
</div>
<h2 class="news" {{ title_attributes }}>
<a href="{{ url }}" rel="bookmark">{{ label }}</a>
</h2>
{{ content.body }}
</article>
A:
The Display Suite module has a views integration that allows you to choose a different display mode for the first node in a view versus all the others. In this case you can easily create a separate template for that view mode and you're good to go. It's extremely powerful. Take a look at this post and look for "Alternating view modes" in the last screenshot.
|
[
"stackoverflow",
"0040026856.txt"
] | Q:
Dynamic variable in throw statement fails with CS0155
I encountered a bizarre bug (?) while refactoring some code.
For some unknown reason, I get a compiler error when passing a dynamic variable to a static method in a throw statement.
dynamic result = comObj.getfoo(bar);
// ....
throw ApiException.FromResult(result);
Error CS0155
The type caught or thrown must be derived from System.Exception
This makes no sense as the returned value is derived from System.Exception.
Furthermore, what makes me lean toward this being a bug is that the error can be circumvented by either:
changing dynamic to var
var result = 0;
by casting the variable to object.
throw ApiException.FromResult((object)result);
Any ideas?
Live on dotnetfiddle
/* PS: code omitted for brevity */
using System;
public class Program
{
public class ApiException : Exception
{
public static ApiException FromResult(object result)
{
return new ApiException();
}
}
[STAThread]
public static void Main(string[] args)
{
dynamic result = 0;
throw ApiException.FromResult(result);
}
}
A:
This is "expected" behavior as any statement containing dynamic is treated as if its result type is dynamic, which can't be cast to Exception at compile time.
dynamic x = new Exception();
throw x; // fails to compile as x's type
// is not known to derive from `Exception` at compile time
throw (Exception)x; // works
Your workarounds change compile-time result from dynamic to proper type and hence compile correctly.
dynamic result =....
throw
// compile type here is "dynamic" as one parameter is `dynamic`
ApiException.FromResult(result);
This works because var is not dynamic (What's the difference between dynamic(C# 4) and var?)
var result = ... // any non dynamic type - staticly know at compile time
throw
// compile type here is ApiException because type of parameter is known
// at compile time
ApiException.FromResult(result);
Similar problem happen when you try to invoke extensions on dynamic result - Extension method and dynamic object
To address why throw is treated differently from a function call (which happily resolves at run time): the specification for the throw statement requires the type of the expression to be Exception, unlike expressions such as method invocations, for which there are rules for dynamic binding (7.2 Static and Dynamic Binding).
It is an interesting note that other statements like while are happy with a dynamic value at compile time, and I don't see a reason why they behave differently.
Excerpts from the C# 5 spec:
7.2 Static and Dynamic Binding
...
When an operation is dynamically bound, little or no checking is performed by the compiler. Instead if the run-time binding fails, errors are reported as exceptions at run-time.
The following operations in C# are subject to binding:
• Method invocation: e.M(e1,…,en)
...
8.9.5 The throw statement:
...
The expression must denote a value of the class type System.Exception, of a class type that derives from System.Exception or of a type parameter type that has System.Exception (or a subclass thereof) as its effective base class.
|
[
"stackoverflow",
"0021135692.txt"
] | Q:
How to convert list of objects with two fields to array with one of them using LINQ
It's hard for me to explain, so let me show it with pseudo code:
ObjectX
{
int a;
string b;
}
List<ObjectX> list = //some list of objectsX//
int [] array = list.Select(obj=>obj.a);
I want to fill an array of ints with ints from objectsX, using only one line of linq.
A:
You were almost there:
int[] array = list.Select(obj=>obj.a).ToArray();
you need to just add ToArray at the end
A:
The only problem in your code is that Select returns IEnumerable.
Convert it to an array:
int[] array = list.Select(obj=>obj.a).ToArray();
|
[
"math.stackexchange",
"0001271276.txt"
] | Q:
Guessing the color of a ball from the basket
Let's say I have a basket with green and red balls. $70\%$ of the balls are green (let's call the fraction $f_g = 0.7$), the rest is red - $f_r=0.3$.
Now, suppose I take one ball from the basket and guess its color (without looking, of course). What's the probability that my guess was wrong?
Supposedly the answer is $f_g(1-f_g)+ f_r(1-f_r)$, but I cannot convince myself this is correct. How to describe the problem mathematically (sample space, etc.)?
A:
There are two situations :
You guess the color uniformly at random, i.e. you guess green $50$% of the time and red the other $50$% of the time.
You guess strategically to reduce the number of wrong guesses. In this situation, you must check whether it is better to guess green $70$% of the time and red $30$% of the time, or to always guess the color that is more common.
For both situations we have :
P(Wrong guess) = P(Guess is Green & You pick Red) + P(Guess is Red & You pick Green)
As your guess and the pick are independent, the probability of each part is the product of its factors, i.e.:
$P(Wrong_{Guess}) = P(Guess_{Green})* P(Pick_{Red})+P(Guess_{Red})*P(Pick_{Green})$
So for 1st situation :
$P(Wrong_{Guess})= 1/2 * 0.3 + 1/2* 0.7 = 0.5$
And for 2nd situation :
$P(Wrong_{Guess})= 0.7 * 0.3+ 0.3*0.7= 0.42 $
Look, $42$% is greater than $30$%. So the best way to minimize wrong guesses is to say Green every time, and in that situation, $P(Wrong_{Guess}) = 0.30$.
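As a sanity check, the three strategies can be computed directly (a small sketch; the dictionary just enumerates the two colors):

```python
f = {"green": 0.7, "red": 0.3}  # probability of picking each color

def p_wrong(guess):
    """P(wrong) when the guess is independent of the pick:
    sum over colors of P(guess that color) * P(pick is a different color)."""
    return sum(guess[c] * (1 - f[c]) for c in f)

print(round(p_wrong({"green": 0.5, "red": 0.5}), 2))  # 0.5  (pure random guess)
print(round(p_wrong({"green": 0.7, "red": 0.3}), 2))  # 0.42 (match the frequencies)
print(round(p_wrong({"green": 1.0, "red": 0.0}), 2))  # 0.3  (always guess green)
```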
|
[
"stackoverflow",
"0058741331.txt"
] | Q:
Program to find sum and if entered 'y' then repeat the process using do while loop
Here a print message "Do you want to run again?" is asked, if we enter "y" then program will repeat.
I tried this code
#include<stdio.h>
int main()
{
int a, b, c;
char ch;
ch = 'y';
printf("enter 1st and 2nd no.");
scanf("%d%d", &a, &b);
{
c = a + b;
printf("%d", c);
printf("Do you want to run again?");
scanf("%s", &ch);
}
while(ch == 'y')
return 0;
}
A:
Use a do-while loop:
#include<stdio.h>
int main()
{
int a, b, c;
char ch;
do {
printf("enter 1st and 2nd number: ");
scanf("%d %d", &a, &b);
c = a + b;
printf("%d\n", c);
printf("Do you want to run again?: ");
scanf(" %c", &ch);  /* %c reads a single char; the leading space skips the leftover newline */
}
while(ch=='y');
return 0;
}
|
[
"stackoverflow",
"0011639684.txt"
] | Q:
Is it appropriate to learn and use the ACM package for JAVA?
Right now I'm watching the Stanford's channel in youtube, and to be precise I'm watching the Java lectures by Professor Mehran Sahami. I already have some level of theoretical knowledge of Java but I find these lectures very interesting but there's one thing that confuses me and I want to clarify it before going any further.
In the examples there are a lot of differences from what I've seen so far in the books I've read and even in the original documentation from Sun. In these lectures the main method is, as it seems, public void run() instead of public static void main(String[] args). For console output he uses only println() instead of System.out.println(), and I suspect that going more deeply into the Java language there will be even more differences from what I would call "the standard syntax."
From what I understand, all this comes from using the ACM package, and I really don't know if continuing to watch will help me or just confuse me more. Is this ACM package of any practical use? Does it make the Java syntax so different from the usual that I could end up with a bunch of useless commands? Do you think it would be better to leave these videos for now and come back later, when I can make use of the useful information and be more aware of the outdated parts, or is the difference not that big?
Thanks in advance
Leron
A:
I don't think it's going to get in the way of learning Java. It provides a framework within which your code will run, much as do Java applets, Swing, Android, or some other framework. It will not be the same as vanilla Java, but the ACM package is well documented. Once you master the basic concepts, learning the extra stuff you will need to wean yourself from the ACM package won't be hard at all.
A:
As long as you recognize the differences between the "shortcuts" provided in the ACM code and the standard syntax, I don't think there's any inherent harm in following these lectures. I've heard some decent things about the series, and if your goal is to learn the basics of the language (or fill in gaps in existing knowledge), then I think it's a fine resource.
On the other hand, I have never seen the ACM libraries used outside of an academic setting. Personally I only used them once, on a single project for a single (non-required) class while I was persuing my undergrad degree. If you're already familiar with the language, and know the basic concepts, I'd look for more standard tutorials that don't make use of esoteric or specialized code bases. For the most part, the ACM libraries seem to contain shortcuts and a standardized framework to aid in teaching (and learning) core concepts rather than worrying the exact syntax or any quirks that might be present in the language.
A:
If I had to summarise it, I would say that the ACM Java Libraries' main goal is to free you from the syntax so you can focus on the concepts.
The ACM Java Library package is an excellent tool to introduce programming concepts to newbies. As you can see the lecture's title is Programming Methodology and not Programming Java.
Hope it clarifies
|
[
"gaming.stackexchange",
"0000242841.txt"
] | Q:
How many Vaults are in Fallout 4 and what were their purposes?
How many Vaults are in Fallout 4 and what were their purposes?
I know that Vault 111 was used to test the effect of cryogenics on unaware dwellers, but are there other Vaults in the wasteland, and if so, what were their purposes (or the experiments conducted in them)?
So far I've found 4 other vaults, Vault 114, 75, 81, and 95. I am having issues finding out what exactly Vault 95 was for (as the terminals there show no info), and am trouble finding out what else the others were for. Are there also additional vaults that I missed in this list?
A:
The Fallout wiki has an extensive list of known Vaults, as well as their purpose. Each vault description is backed by in-game material you are suppose to look for, but it does appear to confirm that there are no other vaults to be found as of yet.
Vault 75: Vault 75 is an experiment in improving the human genome. It appears these tests were carried out using selective breeding, genetic modification and hormonal treatment. The test subjects have their growth accelerated, and at 18, are "disposed of". It is suggested that in certain situations, such as for the purpose of replacing research staff, exceptional candidates may have been used for other means.
Vault 81: Vault 81 is a testing facility involving antibodies, disease and radiation. Vault 81 was under express orders to not evacuate under any circumstances, unless an official "all clear" was issued. In the event of an evacuation, it is suggested that the overseer would order the 'dweller section' to be mass-incinerated, including the dwellers.
Vault 95: Vault 95 is a social experiment on isolation and drug addiction. All dwellers were drug addicts, and the experiment started pre-war.
Vault 111: This is the vault our hero comes from, and was a test in cryogenics and long-term suspended animation.
Vault 114: Vault 114 was another social experiment, where the inhabitants consisted almost entirely of upper-class society. The living conditions of the vault were advertised as highly luxurious, but in reality they were the exact opposite. The overseer was chosen from the general population, and the interview process favored qualities such as no leadership skill and issues with authority. The purpose is listed in a terminal; "By taking away the luxury and authority these groups saw in surface life, we hope to study their reactions in stressful situations."
|
[
"stackoverflow",
"0021786755.txt"
] | Q:
core-plot - detect touch on line between points of a CPTScatterPlot (iOS)?
I have a line graph made with CPTScatterPlot. I can detect touches on the plot points easily enough, but I want to also respond to touches on the line connecting the points.
Is there an easy way to do that?
I know I can use indexOfVisiblePointClosestToPlotAreaPoint to find the plot point closest to the user's touch. Converting to view coordinates and doing the same with the next (or previous) plot point, I can calculate whether or not the user's touch is on the line connecting those two points with something like:
(pt2.x - pt1.x)*(touchPoint.y - pt1.y) - (pt2.y - pt1.y)*(touchPoint.x - pt1.x)
where pt1 and pt2 are the view coords of the two plot points, and touchPoint is the point where the user touched somewhere between them (pt1.x <= touchPoint.x <= pt2.x).
This will work, but I'm thinking there must be an easier way - it seems as if Core Plot should be able to do this for me.
Is there an easier way to do this, or do I have to do it the hard way? If I have to do it the hard way, would this be something worth submitting as an enhancement to the core plot team? If so, I'll implement it in CPTScatterPlot and send in a patch. Otherwise I'll just do it in my own classes and handle it myself.
A:
Core Plot does not currently support this feature, although there is an open issue requesting it. If you come up with a solution you want to share, pull requests are welcome.
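In the meantime, the "hard way" the question describes is not much code. Here is a language-neutral sketch of the hit test (written in Python for brevity — the same arithmetic ports directly to the view-coordinate math in the question; the tolerance value is an assumption you would tune to finger size). It uses the clamped perpendicular distance to the segment, which is more robust than the raw cross-product value, since the latter scales with segment length:

```python
def touch_on_segment(pt1, pt2, touch, tol=10.0):
    """True if `touch` lies within `tol` points of the segment pt1-pt2 (view coords)."""
    (x1, y1), (x2, y2), (tx, ty) = pt1, pt2, touch
    dx, dy = x2 - x1, y2 - y1
    length_sq = dx * dx + dy * dy
    if length_sq == 0:  # degenerate segment: fall back to distance from pt1
        return ((tx - x1) ** 2 + (ty - y1) ** 2) ** 0.5 <= tol
    # Project the touch onto the segment, clamping to the endpoints.
    t = max(0.0, min(1.0, ((tx - x1) * dx + (ty - y1) * dy) / length_sq))
    px, py = x1 + t * dx, y1 + t * dy  # closest point on the segment
    return ((tx - px) ** 2 + (ty - py) ** 2) ** 0.5 <= tol

print(touch_on_segment((0, 0), (100, 0), (50, 4)))   # True
print(touch_on_segment((0, 0), (100, 0), (50, 40)))  # False
```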
|
[
"stackoverflow",
"0033565519.txt"
] | Q:
python: if not this and not that
I want to write a condition which checks if both variables are False but I'm not sure if what I'm using is correct:
if not var1 and not var2:
#do something
or should it be:
if not (var1 and var2):
#do something
A:
This is related to De Morgan's laws: (not A) and (not B) is equivalent to not (A or B), and (not A) or (not B) is equivalent to not (A and B).
Be careful, though: the two forms in your question are not equivalent. not var1 and not var2 is true only when both variables are falsy (it equals not (var1 or var2)), while not (var1 and var2) is true whenever at least one of them is falsy. Since you want to check that both are False, use the first form, or equivalently not (var1 or var2). Beyond that, use the logical structure that lets you state the condition as clearly as possible, on a case-by-case basis.
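An exhaustive check over all truth assignments makes this concrete — it confirms both De Morgan equivalences, and shows that the two forms from the question disagree when exactly one variable is truthy:

```python
from itertools import product

cases = list(product([False, True], repeat=2))

# Both De Morgan equivalences hold for every combination:
assert all((not a and not b) == (not (a or b)) for a, b in cases)
assert all((not a or not b) == (not (a and b)) for a, b in cases)

# But `not a and not b` and `not (a and b)` are different conditions:
for a, b in cases:
    print(a, b, (not a and not b), (not (a and b)))
# They disagree whenever exactly one of the two values is truthy.
```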
|
[
"stackoverflow",
"0034871546.txt"
] | Q:
Issue with Perl Regex
new perl coder here.
When I copy and paste the text from a website into a text file and read from that file, my perl script works with no issues. When I use getstore to create a file from the website automatically which is what I want, the output is a bunch of |'s.
The text looks identical whether I copy and paste it or download it with getstore. I'm unable to figure out the problem. Any help would be highly appreciated.
The output that I desire is as follows:
|www\.arkinsoftware\.in|www\.askmeaboutrotary\.com|www\.assculturaleincontri\.it|www\.asu\.msmu\.ru|www\.atousoft\.com|www\.aucoeurdelanature\.
enter code here
Here is the code I am using:
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;
getstore("http://www.malwaredomainlist.com/hostslist/hosts.txt", "malhosts.txt");
open(my $input, "<", "malhosts.txt");
while (my $line = <$input>) {
chomp $line;
$line =~ s/.*\s+//;
$line =~ s/\./\\\./g;
print "$line\|";
}
A:
The bunch of | characters comes from the comment lines at the beginning of the file, which don't match the format you expect. The solution is to skip all such "unfitting" lines.
So instead of
$line =~ s/.*\s+//;
use
next unless $line =~ s/^127.*\s+//;
so you ignore every line except those starting with 127.
|
[
"stackoverflow",
"0010022879.txt"
] | Q:
Database count column
I have two tables:
"sites" has_many "users"
"users" belongs_to "sites"
Is it better, whenever a user gets added to a site, to increment a users_count column in the sites table, or is doing a count on the users table the best way?
A:
"Better" is a subjective term.
However, I'll be adamant about this. There should not be two sources of the same information in a database, simply because they may get out of step.
The definitive way to discover how many users belong to a site is to use count to count them.
Third normal form requires that every non-key attribute depends on the key, the whole key, and nothing but the key (so help me, Codd).
If you add a user count to sites, that does not depend solely on the sites key value, it also depends on information in other tables.
You can revert from third normal form for performance if you understand the implications and mitigate the possibility of inconsistent data (such as with triggers) but the vast majority of cases should remain 3NF.
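To illustrate the "count them" approach with an in-memory SQLite database (the table and column names here are assumptions matching the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sites (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE users (id INTEGER PRIMARY KEY, site_id INTEGER REFERENCES sites(id));
    INSERT INTO sites VALUES (1, 'example.com');
    INSERT INTO users (site_id) VALUES (1), (1), (1);
""")

# The definitive source of truth: derive the count from the users table itself,
# rather than maintaining a duplicate users_count column that can drift.
(count,) = conn.execute(
    "SELECT COUNT(*) FROM users WHERE site_id = ?", (1,)
).fetchone()
print(count)  # 3
```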
|
[
"stackoverflow",
"0003874762.txt"
] | Q:
Problem POSTing an multidimensional array using cURL PHP
I'm having problems posting an array using PHP cURL.
I have successfully posted other values to the same page using POST-variables.
But this one is hard to figure out. The only problem is how I should present the data to the server.
I checked the original form using a form analyzer. And the form analyzer shows that the POST variables are sent like this:
array 'fundDistribution' =>
array
204891 => '20' (length=2)
354290 => '20' (length=2)
776401 => '20' (length=2)
834788 => '40' (length=2)
The values are just for showing an example. But they will be the same length.
My problem is that the responding server does not recognise the values when I send them like this:
Array(
[104786] => 20
[354290] => 20
[865063] => 20
[204891] => 20
[834788] => 20)
My question is: How do I send the data so the server understands it?
Thank you!
A:
function flatten_GP_array(array $var,$prefix = false){
$return = array();
foreach($var as $idx => $value){
if(is_scalar($value)){
if($prefix){
$return[$prefix.'['.$idx.']'] = $value;
} else {
$return[$idx] = $value;
}
} else {
$return = array_merge($return,flatten_GP_array($value,$prefix ? $prefix.'['.$idx.']' : $idx));
}
}
return $return;
}
//...
curl_setopt($ch, CURLOPT_POSTFIELDS,flatten_GP_array($array));
|
[
"stackoverflow",
"0024031015.txt"
] | Q:
XSLT copy over the node structure of parent within foreach
I have an input XML which needs to be transformed to output format given below.
The problem I am facing is copying the "IssuedList" element unchanged to the output. Since I have a for-each loop, I can't simply apply an identity transform. Is there any other method to achieve the same result? Given below is the XSLT I tried.
Input XML:
<Books>
<Book>
<Id>1</Id>
<Name>ABC</Name>
<Categories RefId="1">
<Category>
<Priority>High</Priority>
</Category>
</Categories>
<Categories RefId="2">
<Category>
<Priority>Low</Priority>
</Category>
</Categories>
<IssuedList>
<IssueList>
<Number>1</Number>
<Name>ABC</Name>
</IssueList>
<IssueList>
<Number>1</Number>
<Name>ABC</Name>
</IssueList>
</IssuedList>
</Book>
<Book>
<Id>2</Id>
<Name>DEF</Name>
<Categories RefId="1">
<Category>
<Priority>High</Priority>
</Category>
</Categories>
<Categories RefId="2">
<Category>
<Priority>Low</Priority>
</Category>
</Categories>
<IssuedList>
<IssueList>
<Number>1</Number>
<Name>DEF</Name>
</IssueList>
</IssuedList>
</Book>
</Books>
And would like to see the transformed output as
<Books>
<Book>
<Id>1</Id>
<Name>ABC</Name>
<RefId>1</RefId>
<Priority>High</Priority>
<IssuedList>
<IssueList>
<Number>1</Number>
<Name>ABC</Name>
</IssueList>
<IssueList>
<Number>1</Number>
<Name>ABC</Name>
</IssueList>
</IssuedList>
</Book>
<Book>
<Id>1</Id>
<Name>ABC</Name>
<RefId>2</RefId>
<Priority>Low</Priority>
<IssuedList>
<IssueList>
<Number>1</Number>
<Name>ABC</Name>
</IssueList>
<IssueList>
<Number>1</Number>
<Name>ABC</Name>
</IssueList>
</IssuedList>
</Book>
<Book>
<Id>2</Id>
<Name>DEF</Name>
<RefId>1</RefId>
<Priority>High</Priority>
<IssuedList>
<IssueList>
<Number>1</Number>
<Name>DEF</Name>
</IssueList>
</IssuedList>
</Book>
<Book>
<Id>2</Id>
<Name>DEF</Name>
<RefId>2</RefId>
<Priority>Low</Priority>
<IssuedList>
<IssueList>
<Number>1</Number>
<Name>DEF</Name>
</IssueList>
</IssuedList>
</Book>
</Books>
I have written the XSLT as below
<xsl:for-each select ="/Books/Book">
<xsl:variable name="Id" select ="Id"></xsl:variable>
<xsl:variable name="Name" select ="Name"></xsl:variable>
<xsl:for-each select ="Categories/Category">
<Book>
<RefId>
<xsl:value-of select ="@RefId"></xsl:value-of>
</RefId>
<Id>
<xsl:value-of select ="$Id"></xsl:value-of>
</Id>
<Name>
<xsl:value-of select ="$Name"></xsl:value-of>
</Name>
</Book>
</xsl:for-each>
</xsl:for-each>
A:
I think you are making this much more complicated than it needs to be. Try:
XSLT 1.0
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
<xsl:template match="/">
<Books>
<xsl:for-each select="Books/Book/Categories">
<Book>
<xsl:copy-of select="../Id"/>
<xsl:copy-of select="../Name"/>
<RefId><xsl:value-of select="@RefId"/></RefId>
<xsl:copy-of select="Category/Priority"/>
<xsl:copy-of select="../IssuedList"/>
</Book>
</xsl:for-each>
</Books>
</xsl:template>
</xsl:stylesheet>
|
[
"ell.stackexchange",
"0000177597.txt"
] | Q:
"Whoever he is", is it polite?
Is it polite to refer to someone we don't know as whoever he is?
For example:
"Jim" is a professional photographer, whoever he is.
I feel it's a little dismissive of the person. It doesn't sound polite to me.
A:
The expression is not only a little impolite in this particular case but a little strange.
You've already identified the name of the person as well as his profession. Therefore, whoever he is would typically be out of place.
But having said that, the use of the sarcastic quotation marks (or scare quotes) around Jim's name actually makes the fact that it's an impolite statement almost expected—and perhaps not out of place at all.
Because of the scare quotes, the sentence is starting off sarcastically anyway. An inference can be made that the speaker is suspicious of the person's actual identity; there could be some reason to believe that Jim is just a fake name. And "whoever he is" just reinforces that suspicion.
Discounting the scare quotes, we normally only use whoever when we know nothing at all, or extremely little, about someone:
Whoever it was, they stole my car.
But if I know that my car was stolen by the professional photographer Jim, I would not be saying "whoever" in the same sentence.
Rather than in a sense of "mystery identification," whoever can also be used in a general sense:
Whoever stops the gas leak, they had better do it soon.
Here, it doesn't matter who the person is, just that it's someone.
So, I would never expect to hear the sentence in your question (if the scare quotes weren't there).
A variation, however, is common:
Jim is a professional photographer, whatever else he is.
Here, we know he's a photographer and it's an important piece of information amongst all of the other things that he might also be.
|
[
"gis.stackexchange",
"0000033817.txt"
] | Q:
Is there a way to create a Personal Geodatabase in QGIS?
Is there a way to create a Personal Geodatabase in QGIS? I know QGIS can view Personal Geodatabases, but can one be created? I have a project where the client is requiring a small spatial database (they are very low on the tech level, so I do not want to jump into PostGIS etc etc). They also only use ArcGIS. I would like to do the project in QGIS, hence my question.
A:
According to the GDAL docs:
OGR optionally supports reading ESRI Personal GeoDatabase .mdb files
via ODBC. Personal GeoDatabase is a Microsoft Access database with a
set of tables defined by ESRI for holding geodatabase metadata, and
with geometry for features held in a BLOB column in a custom format
(essentially Shapefile geometry fragments). This drivers accesses the
personal geodatabase via ODBC but does not depend on any ESRI
middle-ware.
Writing a personal geodatabase on the other hand, is another story. Access mdb is a proprietary file format and open source projects like QGIS tend to stay away from them. I did see an old open source project called MDB Tools which aims to
MDB Tools is an open source suite of libraries and utilities to read
(and soon write) MDB database files.
I'm not sure how far along their goal that project is though. So no, I don't think you can create personal geodatabase files using QGIS as of now or any time soon.
As for your client's request, @Ragi has just finished writing an ArcGIS plugin that lets you use OGR sources, including Spatialite and PostGIS. That way you can use ArcGIS and PostGIS (or Spatialite, whichever you prefer). You might want to give it a try.
I hope that helps.
A:
QGIS uses the OGR library for the majority of its GIS format access. Personal GDB access falls under this grouping.
Based on the OGR Vector Format's page here: OGR Vector Formats
The ESRI Personal GeoDatabase driver does not have Creation support.
|
[
"stackoverflow",
"0044294915.txt"
] | Q:
Java doesn't iterate through all files in a big directory
I'm doing some data mining for the first time using the Enron email dataset.
I'm trying to iterate through every file in a directory and parse into a CSV file the date, time and sender of every file.
The problem is that Java doesn't seem to iterate through all of them, which is why my CSV file is around 1000 lines too short. How can I solve this?
My code:
public class FileReader {
public static void main(String[] args) throws FileNotFoundException{
FileReader fileReader = new FileReader();
//fileReader.mainFunction("maildir/skilling-j/_sent_mail");
fileReader.mainFunction("maildir/skilling-j/inbox");
/*fileReader.mainFunction("maildir/skilling-j/sent");
fileReader.mainFunction("maildir/lay-k/inbox");
fileReader.mainFunction("maildir/lay-k/_sent");
fileReader.mainFunction("maildir/lay-k/sent");*/
System.out.println("done!");
}
public void mainFunction(String fileName) throws FileNotFoundException{
File maindir = new File(fileName);
PrintWriter pw = new PrintWriter(new File("Analysis.csv"));
StringBuilder sb = new StringBuilder();
StringBuilder sbpre = new StringBuilder();
Scanner scanner;
sbpre.append("Date");
sbpre.append(',');
sbpre.append("Time");
sbpre.append(",");
sbpre.append("From");
sbpre.append('\n');
int endcounter = 0;
pw.write(sbpre.toString());
File [] files = maindir.listFiles();
for(int i = 0; i < files.length; i++){
scanner = new Scanner(files[i]);
System.out.println(files[i].getPath());
while (scanner.hasNextLine()) {
String lineFromFile = scanner.nextLine();
String month = "Jun";
String year = "2000";
String time = "00:00:00";
if(lineFromFile.contains("Date:") & (lineFromFile.length()== 43 | lineFromFile.length()== 42 )){
if(lineFromFile.length()==43){
sb.append(lineFromFile.substring(11,13));
month = lineFromFile.substring(14, 17);
year = lineFromFile.substring(18,22);
time = lineFromFile.substring(23,30);
}else{
sb.append("0");
sb.append(lineFromFile.substring(11,12));
month = lineFromFile.substring(13, 16);
year = lineFromFile.substring(17,21);
time = lineFromFile.substring(22,29);
}
sb.append(".");
switch(month){
case "Jan":sb.append("01"); sb.append(".");break;
case "Feb":sb.append("02"); sb.append(".");break;
case "Mar":sb.append("03"); sb.append(".");break;
case "Apr":sb.append("04"); sb.append(".");break;
case "May":sb.append("05"); sb.append(".");break;
case "Jun":sb.append("06"); sb.append(".");break;
case "Jul":sb.append("07"); sb.append(".");break;
case "Aug":sb.append("08"); sb.append(".");break;
case "Sep":sb.append("09"); sb.append(".");break;
case "Oct":sb.append("10"); sb.append(".");break;
case "Nov":sb.append("11"); sb.append(".");break;
case "Dec":sb.append("12"); sb.append(".");break;
}
sb.append(year);
sb.append(",");
sb.append(time);
sb.append(",");
}
if(lineFromFile.contains("X-From:")) {
lineFromFile = lineFromFile.replace(",", " ");
sb.append(lineFromFile.substring(8));
}
pw.write(sb.toString());
sb.setLength(0);
}
sb.append('\n');
endcounter = i;
}
pw.close();
System.out.println(endcounter);
}
}
Console log last lines:
maildir\skilling-j\inbox\997_
maildir\skilling-j\inbox\998_
maildir\skilling-j\inbox\999_
maildir\skilling-j\inbox\99_
maildir\skilling-j\inbox\9_
1251
done!
It should actually be around 2500 lines.
Also, it would be nice to know how I can iterate through a directory of directories (e.g. "maildir/skilling-j") instead of a single directory of files.
And I know that the code is kind of bloated but that's the result of an incompetent coder (me).
A:
The listFiles() method returns a list of files and directories. You can use the isFile() and isDirectory() methods to identify the type of each entry. Try this simple code to verify the files in your folder:
File[] files = maindir.listFiles();
System.out.println("Files count: " + files.length);
for (int i = 0; i < files.length; i++) {
System.out.print(files[i].getAbsolutePath());
if (files[i].isDirectory()) {
System.out.println(" dir");
} else if (files[i].isFile()) {
System.out.println(" file");
}
}
You could use the isDirectory() method to filter only directories and iterate through them.
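To answer the follow-up about walking a directory of directories such as maildir/skilling-j, here is a minimal recursive sketch (my own illustration, not the asker's code — the class and method names are made up):

```java
import java.io.File;

// Hypothetical sketch: recursively visit every regular file under a
// root directory, descending into subdirectories such as
// maildir/skilling-j/inbox instead of stopping at the first level.
public class RecursiveWalk {

    // Returns the number of regular files found under 'dir'.
    public static int countFiles(File dir) {
        File[] entries = dir.listFiles();
        if (entries == null) {   // not a directory, or an I/O error
            return 0;
        }
        int count = 0;
        for (File entry : entries) {
            if (entry.isDirectory()) {
                count += countFiles(entry);   // recurse into subfolders
            } else if (entry.isFile()) {
                count++;                      // process the mail file here
            }
        }
        return count;
    }

    public static void main(String[] args) {
        String root = args.length > 0 ? args[0] : ".";
        System.out.println(countFiles(new File(root)) + " files");
    }
}
```

Calling countFiles(new File("maildir/skilling-j")) would then visit inbox, sent, etc. in one pass; the per-file parsing from the question would go where the counter is incremented.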
|
[
"gaming.stackexchange",
"0000122689.txt"
] | Q:
What effect do Admirals/Generals/Commanders have?
I know that Admirals make your fleets better, Generals make your troops better (including detection), and Commanders make your starfighters better, but these are very vague statements.
What specifically do they affect and by how much?
Does Leadership factor in at all?
A:
From a Gamefaqs walkthrough
Officers: Certain Characters can be different types of officers. You
can find out which ones a character can be in their status menu.
Basically, Officers enhance military units. Admirals enhance fleets,
Generals enhance troops, and Commanders enhance fighters. Admirals
and Commanders make their units faster and more responsive in
Tactical Mode and Generals make their troops much stronger and more
effective.
Basically, except for Generals, I don't notice Officers
boosting your units combat effectiveness that much. Their main
strength lies in their ability to drastically increase the detection
ratings of their units. This can greatly increase your defenses
against enemy covert missions. So, it would be a good idea to post an
officer on your more important planets to protect them.
Even when you have an idle character and just don't have anything for him/her to
do, you can always make them an officer for added defense in your
territory, that way they're not being wasted.
|
[
"cs.stackexchange",
"0000070212.txt"
] | Q:
Rice's Theorem for Total Computable Functions
Fix a Gödel numbering, and write $\phi_n$ for the function coded by $n$. Rice's theorem states that if $P$ is the set of partial computable functions, and $A \subseteq P$, then the decision problem
Given $n$, does $\phi_n \in A$?
is decidable if and only if $A = \emptyset$ or $A = P$; that is, if the decision problem always has the same answer.
Now consider the set $T$ of total computable functions instead. Clearly, the problem
Given $n$ with $\phi_n \in T$, does $\phi_n \in A$?
is no longer decidable if and only if $A \in \{\emptyset,T\}$. In fact, for any recursive set $M\subseteq \mathbb N^k$, the problem is also decidable for
$$
A_M = \{\phi_n \in T \mid (\phi_n(0),\ldots,\phi_n(k-1)) \in M\}.
$$
So is it true that the problem is decidable if and only if $A$ is of the form $A_M$ as above? If not, can we give a different restriction on the form of $A$ that does give us a theorem?
A:
Almost
The correct answer is that a property of recursive languages is r.e. if and only if it can be verified by a finite number of values (though unlike in your example the exact number of values can be unbounded, so $k$ can depend on $n$). In fact, the wikipedia page for Rice's theorem has a section on this:
...the analogue says that if one can effectively determine for every recursive set whether it has a certain property, then only finitely many integers determine whether a recursive set has the property.
More precisely, a property is r.e. iff there is an r.e. set $T_1$ with the prefix property (i.e.- no string in $T_1$ is a prefix of another) such that $\phi_n$ has the property iff there is some $k$ such that the sequence $\phi_n(0), ..., \phi_n(k-1)$ is in $T_1$ (we can make sense of a sequence being in a set of natural numbers by encoding this as $p_1^{\phi_n(0)}\cdot p_2^{\phi_n(1)}\cdot ...\cdot p_{k}^{\phi_n(k-1)}$ where $p_i$ is the $i$th prime). A property would be recursive iff it is r.e. and co-r.e.
This pretty much says that we can get no more information out of the index $n$ than the function $\phi_n$ itself, because any computable property must be computable only using calls to the function $\phi_n$. This may sound trivial, but it is by no means obvious since a priori the input $n$ might have some extra non-syntactic information computably nestled into it.
Rice's theorem essentially states that no extra non-syntactic information about partial recursive functions can be computably nestled into their Godel encodings. As it turns out this is also equivalent to the above characterization, so the same thing holds for total functions as well. I suppose one could make this statement even stronger by generalizing to even more general classes of functions, but I do not know if this is true.
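As an illustrative aside (my addition, not part of the answer above): the prime-power encoding of a finite sequence can be sketched in a few lines of Python. Note that, as stated, sequences differing only in trailing zeros share a code; standard treatments avoid this by offsetting each exponent by one.

```python
def primes(k):
    """Return the first k primes by trial division (fine for small k)."""
    found = []
    n = 2
    while len(found) < k:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

def encode(seq):
    """Encode (x_0, ..., x_{k-1}) as p_1^x_0 * p_2^x_1 * ... * p_k^x_{k-1}."""
    code = 1
    for p, x in zip(primes(len(seq)), seq):
        code *= p ** x
    return code

print(encode([2, 1]))  # → 12, i.e. 2**2 * 3**1
```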
|
[
"tex.stackexchange",
"0000026318.txt"
] | Q:
Disabling URLs in bibliography
I use Mendeley for article management and export the related items to a bib file for referencing in LaTeX documents. I use the IEEEtran style and see that the bibliography items include URLs which I don't want. The entries may contain URLs like this:
Available: http://www.mendeley.com/research/improved-adaptive-background-mixture-model-realtime-tracking-shadow-detection-6/
As a solution, I can delete the URL in Mendeley and export it again but I want the URLs remain. I only want them to be hidden in the references. Is there a command to disable URLs in bibliography?
P.S.: I'm not interested in typesetting the URLS as given in this question.
Additional information: I've used the following code for the bibliography:
\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,references}
There's a file named references.bib in the working folder.
A:
If you use biblatex, there's an option called url which can be set to url = false. There are also isbn, doi etc., similar options. If you are not using biblatex, I don't think there's an easy way to get what you want. The traditional bibtex uses a very different language to define the bib style.
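For example, a minimal biblatex preamble with those fields suppressed might look like the sketch below (the `style=ieee` option is an assumption — it comes from the separate biblatex-ieee package, and a backend such as biber is required):

```latex
\usepackage[backend=biber, style=ieee, url=false, isbn=false, doi=false]{biblatex}
\addbibresource{references.bib}

% In the document body, replace \bibliography{...} with:
% \printbibliography
```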
A:
I guess you use the IEEEtran bibliography style coming along with the IEEEtran document class. You can easily adapt this style to ignore any url fields in your bibliographic database. To this end, copy the file IEEEtran.bst to your working directory (if it isn't already there) and apply the following patch:
--- IEEEtran.bst.orig
+++ IEEEtran.bst
@@ -403,7 +403,6 @@
default.ALTinterwordstretchfactor 'ALTinterwordstretchfactor :=
default.name.format.string 'name.format.string :=
default.name.latex.cmd 'name.latex.cmd :=
- default.name.url.prefix 'name.url.prefix :=
}
@@ -1080,7 +1079,7 @@
if$
"\begin{thebibliography}{" longest.label * "}" *
write$ newline$
- "\providecommand{\url}[1]{#1}"
+ "\def\url#1{}"
write$ newline$
"\csname url@samestyle\endcsname"
write$ newline$
A:
I have a cheeky solution to this. I grep "url" in my bibtex file with the invert switch -v -- in effect, it gives me a new bibtex file without any url data. In other words,
grep -v "url =" file.bib > newfile.bib
|
[
"math.stackexchange",
"0003285697.txt"
] | Q:
How can I find a matrix from the product of this matrix and its transpose
I have a matrix $B$ whose dimension is $n \times m$ (with $n>m$). During an iterative process I'll change it to get a desired state of matrix $B$, but in each step, I should check a constraint that $\text{Trace}(B^TB)$ must be equal to an integer number $M$. Here $B^T$ is the transpose of $B$.
To meet the constraint, in each step, after getting the new state of $B$, I apply the pseudocode below to keep the trace of $B^TB$ equal to the given constant $M$.
$$x = B^TB$$
$$\hat{x} = \frac{M}{\text{Trace}(x)}x$$
Now $\hat{x}$ meets the constraint, and $\text{Trace}(B^TB)$ is equal to $M$. But how can I find the matrix $B$ from $\hat{x}$, i.e., how can I recover $B$ from the product of this matrix and its transpose?
A:
You calculate $x$ as above, call $a = \sqrt{\frac{M}{\text{Trace}(x)}}$. The new $B$ is $aB$.
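A quick numeric sanity check of this answer (my own sketch, not part of the answer; the example matrix and $M$ are made up): scaling $B$ by $a = \sqrt{M/\text{Trace}(B^TB)}$ indeed makes the trace equal to $M$.

```python
import math

def matmul_T(B):
    """Compute B^T B for B given as a list of rows."""
    m = len(B[0])
    return [[sum(B[r][i] * B[r][j] for r in range(len(B)))
             for j in range(m)] for i in range(m)]

def trace(x):
    return sum(x[i][i] for i in range(len(x)))

B = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # n=3, m=2; arbitrary example
M = 7.0
a = math.sqrt(M / trace(matmul_T(B)))
B_new = [[a * v for v in row] for row in B]
print(trace(matmul_T(B_new)))  # ≈ 7.0 (up to floating-point rounding)
```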
|
[
"stackoverflow",
"0044088297.txt"
] | Q:
Format date with angular2 moment
I'm getting following error, when using amTimeAgo pipe from angular2-moment.
Deprecation warning: value provided is not in a recognized RFC2822 or ISO format.
moment construction falls back to js Date(), which is not reliable across all browsers and versions.
Non RFC2822/ISO date formats are discouraged and will be removed in an upcoming major release.
Please refer to http://momentjs.com/guides/#/warnings/js-date/ for more info.
Arguments:
[0] _isAMomentObject: true, _isUTC: false, _useUTC: false, _l: undefined, _i: 21-03-2017, _f: undefined, _strict: undefined, _locale: [object Object]
Also pipe is printing Invalid date.
I'm using it like this:
<span class="date-created"> {{ job.createdAt | amTimeAgo }} </span>
And value of job.createdAt is string in format: 22-03-2017.
I understand that something is wrong with format, but don't know how to pass that custom format ('DD-MM-YYYY') to pipe, so that moment package and this angular library can recognize it.
Any ideas?
A:
What about creating a new moment object to pass it into the pipe, like:
let newMomentObj = moment(job.createdAt, 'DD-MM-YYYY');
and in your html file:
<span class="date-created"> {{ newMomentObj | amTimeAgo }} </span>
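If changing the template isn't convenient, another sketch (my own addition, not part of the answer above) is to rewrite the non-ISO 'DD-MM-YYYY' string into the ISO form moment recognizes, which also silences the deprecation warning:

```javascript
// Hypothetical helper: rewrite 'DD-MM-YYYY' as ISO 'YYYY-MM-DD',
// which moment parses without falling back to the js Date() constructor.
function toIsoDate(ddmmyyyy) {
  const [day, month, year] = ddmmyyyy.split('-');
  return `${year}-${month}-${day}`;
}

console.log(toIsoDate('22-03-2017')); // → 2017-03-22
```

You could then pass `toIsoDate(job.createdAt)` to the pipe, or keep the `moment(value, 'DD-MM-YYYY')` approach from the answer.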
|
[
"stackoverflow",
"0055198136.txt"
] | Q:
Slice from sections from URL
Given these URLs:
/content/etc/en/hotels/couver/offers/residents.html
/content/etc/en/offers/purchase.html
I want to remove (slice) from the URL and get only /offers/residents and /offers/purchase
I wrote this code to do it but I'm getting different results than I require. Please let me know which syntax will work as expected.
var test1 = '/content/etc/en/hotels/couver/offers/residents.html'
test1 = test1.slice(0,5);
var test2 = '/content/etc/en/offers/purchase.html'
test2 = test2.slice(0,5);
A:
One way to achieve this would be to split the strings by / and then only use the last two sections of the path to rebuild a string:
['/content/etc/en/hotels/couver/offers/residents.html', '/content/etc/en/offers/purchase.html'].forEach(function(url) {
var locs = url.replace(/\.\w+$/, '').split('/');
var output = locs.slice(-2).join('/');
console.log(output);
});
Alternatively you could use a regular expression to only retrieve the parts you require:
['/content/etc/en/hotels/couver/offers/residents.html', '/content/etc/en/offers/purchase.html'].forEach(function(url) {
var locs = url.replace(/.+\/(\w+\/\w+)\.\w+$/, '$1');
console.log(locs);
});
|
[
"stackoverflow",
"0013920214.txt"
] | Q:
Populate date cells based on one cell's value?
I would like the ability for the user to enter a year (e.g. "2013") in cell D1 and press a button that fires off a macro. This macro will automatically assign a function to cells D2-O2 (one cell for each month in the year) that converts these cells to actual date types.
For instance, cell D2's value would be =DATE(2013, 1, 1), signifying that this cell represents January 1st of 2013. Similarly, cell E2's value would be =DATE(2013, 2, 1), F2's value would be =DATE(2013, 3, 1), etc.
The following is my pseudo code, could you please help me convert this to actual VBA?
var myYear = the value of cell D1
cell D2 value is =DATE(2013,1,1)
cell E2 value is =DATE(2013,2,1)
cell F2 value is =DATE(2013,3,1)
cell G2 value is =DATE(2013,4,1)
cell H2 value is =DATE(2013,5,1)
cell I2 value is =DATE(2013,6,1)
cell J2 value is =DATE(2013,7,1)
cell K2 value is =DATE(2013,8,1)
cell L2 value is =DATE(2013,9,1)
cell M2 value is =DATE(2013,10,1)
cell N2 value is =DATE(2013,11,1)
cell O2 value is =DATE(2013,12,1)
Thanks
A:
Try this:
Sub FillYear()
YearNum = Cells(1, 4) ' Cell D1
For MonthNum = 1 To 12
Cells(2, MonthNum + 3).Value = DateSerial(YearNum, MonthNum, 1)
Next
End Sub
UPDATE:
If you want the date values to be a formula, so that the dates change if the user changes the year, change the line inside the For Loop to be:
Cells(2, MonthNum + 3).FormulaR1C1 = "=DATE(R1C4," & MonthNum & ",1)"
|
[
"physics.stackexchange",
"0000481463.txt"
] | Q:
Difference in Sound and Heat
What makes sound and heat distinct ?
The answer is said to lie in the fact that the former consists of vibration in an ordered fashion while the latter does not.
But why would ordered vibration not be heat, when heat is just the 'jiggling' of molecules (ordered or disordered)?
Shouldn't it be this way: 'All sounds are heat while not all heat is sound'?
A:
What makes sound and heat distinct ?
What makes them distinct is that sound is the transport of mechanical energy from one place to another in the form of mechanical longitudinal waves, whereas heat is defined as the transfer of energy from one substance to another due to temperature difference. Further explanation follows.
Sound is a mechanical wave associated with the vibration of some medium (some form of matter (solid, liquid or gas)). It is a longitudinal wave in which the displacement of the medium is parallel to the propagation of the wave. Google up longitudinal waves and check out the explanation of their association with sound on the Hyperphysics website.
Heat is energy transfer between substances due solely to a temperature difference between the substances. It is not the "jiggling of molecules" of a substance. That jiggling is the translational, rotational, and/or vibrational kinetic energy of the molecules of the substance and is considered to be part of the internal energy of the substance (The kinetic energy component. There is also a potential energy component).
The confusion regarding heat is that although it is not the jiggling of the molecules themselves, it is the mechanism for causing the kinetic energy of the jiggling of the molecules of a higher temperature body (where the jiggling is faster) to transfer to a lower temperature body (where the jiggling is slower). This transfer can occur by conduction, convection, and/or radiation. The first two require a medium (solid, liquid or gas) where the molecules of the hot and cold bodies interact with one another so that the higher kinetic energy molecules give up some of their kinetic energy to the lower kinetic energy molecules. Heat transfer by radiation does not because it is in the form of electromagnetic radiation (transverse waves) capable of traveling through a vacuum with no physical interaction between the hot and cold bodies.
Shouldn't it be this way: 'All sounds are heat while not all heat is sound'?
As already stated, sound and heat are distinctly different, so neither is a form or subset of the other. Sound is a form of mechanical energy. Heat is the transfer of thermal energy due to temperature difference. But there is a connection between heat and sound.
Since sound involves molecular vibrations, to the extent that the molecules “rub” against one another there is some friction involved. Friction causes the local temperature of the medium to rise above its surroundings. That can cause heat transfer from regions of higher temperature to regions of lower temperature. Such friction also occurs when materials absorb sound. So in that sense, sound can produce heat. However the energy levels of sound are so small, the increase in temperature and any heat transfer would, I believe, be very small.
I am unaware of any mechanism by which heat transfer can produce sound. Perhaps another contributor is aware of such a mechanism and would like to comment.
Hope this helps.
|
[
"stackoverflow",
"0009142841.txt"
] | Q:
android Application wallpaper set
I'm developing an Android chat application in which the user is provided with a set of wallpapers, from which he/she can set the wallpaper of the application window, i.e. change it at will. How can this be done through code? Please help.
I don't want code for setting the system/home screen wallpaper. I want code for changing the chat screen wallpaper.
A:
Simply use setBackgroundDrawable(Drawable d) in your main activity that is showing the chat window.
|
[
"stackoverflow",
"0062232329.txt"
] | Q:
How to POST REST API Messages from oozie workflows
I have an workflow which looks like below.
start = fork1
<fork1/>
<action1>
<action2>
<fork1>
<join1 to fork2>
<fork2/>
<action3>
<action4>
<fork2>..
....
....
<join 75 to fork 76>
<fork76>
<action 987>
<action 988>
<fork76/>
<join 76 to "END">
Each action has 2 end nodes.
I would like to modify it in the following way:
<OK> to post a "SUCCESS" message to the REST endpoint and then proceed to next_join_number.
<ERROR> to post a "FAILURE" message to the REST endpoint and then proceed to the email & kill actions.
But I am unsure how to make this generic and achieve it. The only way I can think of is to write 988 separate actions to send status messages, appending one to each action.
A:
Create a sub-workflow for each action.
Each action (let's say Spark) will have a separate workflow, and in that you will have 2 extra actions (probably Shell actions).
<workflow-app name="spark-subworkflow" xmlns="uri:oozie:workflow:0.4">
... # configs
<start to="special-spark"/>
<action name="special-spark">
<spark>
...
</spark>
<ok to="send-success"/>
<error to="send-failure"/>
</action>
<action name="send-success">
<shell>
<job-tracker>[JOB-TRACKER]</job-tracker>
<name-node>[NAME-NODE]</name-node>
<exec>script-to-run.sh</exec>
<env-var>MESSAGE_TO_SEND=SUCCESS</env-var>
<file>hdfs:///path-to-script/your-rest-script.sh#script-to-run.sh</file>
</shell>
<ok to="end"/>
<error to="end"/>
</action>
<action name="send-failure">
<shell>
<job-tracker>[JOB-TRACKER]</job-tracker>
<name-node>[NAME-NODE]</name-node>
<exec>script-to-run.sh</exec>
<env-var>MESSAGE_TO_SEND=FAILURE</env-var>
<file>hdfs:///path-to-script/your-rest-script.sh#script-to-run.sh</file>
</shell>
<ok to="kill"/>
<error to="kill"/>
</action>
</workflow-app>
In this way you need to replace each of your actions. Parameterise the sub-workflow so that it can be reused for the same type of action.
Notice that I have created 2 actions, one for success and one for failure. That's because if the action which sends the status fails, you still want your workflow to continue. So for both the error and no-error outcomes of the send-success action your workflow should continue; likewise, send-failure will kill the sub-workflow either way.
I tried to achieve this using a Decision Node, but with no luck, so the only option is to create 2 separate actions. You can still use the same script your-rest-script.sh, as MESSAGE_TO_SEND is the parameter for both actions. Using a java/python shell action, the flow would be the same.
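For completeness, here is one hypothetical shape for your-rest-script.sh (everything below is an assumption — the original answer does not show the script body). It reads the status from the MESSAGE_TO_SEND environment variable set via <env-var> and builds a JSON payload; the actual POST is left as a commented curl call since the endpoint is unknown:

```shell
#!/bin/sh
# Hypothetical sketch of your-rest-script.sh.
# MESSAGE_TO_SEND is injected by the Oozie shell action's <env-var>.
STATUS="${MESSAGE_TO_SEND:-UNKNOWN}"
PAYLOAD="{\"status\": \"${STATUS}\"}"
echo "$PAYLOAD"
# In a real deployment you would POST it, e.g.:
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$PAYLOAD" "http://your-endpoint/api/status"
```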
|
[
"stackoverflow",
"0036431852.txt"
] | Q:
Parsleyjs Change default behavior of button in parsley form validation
I am using parsleyjs for form validation. I have two buttons, 'save' and 'cancel'. I want to use the save button to submit the form; the cancel button should not submit it. Currently, when I click either of them, the form is submitted.
<form id="form_validation">
<div>
<input type="text" required/>
</div>
<div>
<button type="submit">Save</button>
<button type="submit">Cancel</button>
</div>
</form>
<script>
var $formValidate = $('#form_validation');
$formValidate.parsley().on('form:submit', function () {
//this code is called when I click save or cancel button
});
</script>
A:
If your 'cancel' button isn't meant to submit the form, then it shouldn't have type="submit". Problem solved.
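Concretely, only the cancel button's type needs to change — per the HTML spec, a button with type="button" does nothing on its own, so it triggers neither submission nor Parsley validation:

```html
<button type="submit">Save</button>
<button type="button">Cancel</button>
```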
|
[
"stackoverflow",
"0047115447.txt"
] | Q:
How do I change the cardview position to the bottom of the tablayout?
I tried to create a cardview consisting of five items in a fragment. However, the cardview position is always at the top of the layout. I tried using margin and align, but it didn't work.
How do I change the position of the cardview?
Here is my card_item.xml:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:android.support.v7.cardview="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="85dp"
android:orientation="vertical"
android:id="@+id/my_relative_layout"
>
<android.support.v7.widget.CardView
android:id="@+id/card_view"
xmlns:card_view="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="150dp"
android:layout_marginRight="10dp"
android:layout_marginLeft="10dp"
android:layout_marginBottom="8dp"
android:backgroundTint="#E3F0F5"
android:layout_centerVertical="true"
card_view:cardCornerRadius="5dp"
card_view:elevation="14dp"
>
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="match_parent">
<ImageView
android:layout_width="50dp"
android:layout_height="50dp"
android:id="@+id/iv_image"
android:src="@drawable/avatar1"
android:layout_centerVertical="true"
android:layout_marginLeft="15dp"
android:layout_marginRight="10dp"
/>
<TextView
android:textColor="#95989A"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/tv_text"
android:layout_toRightOf="@+id/iv_image"
android:gravity="center"/>
<TextView
android:textColor="#95989A"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/tv_blah"
android:text="Another RV Fragment"
android:layout_below="@+id/tv_text"
android:layout_toRightOf="@+id/iv_image"
android:layout_toEndOf="@+id/iv_image"/>
</RelativeLayout>
</android.support.v7.widget.CardView>
</RelativeLayout>
Here is my activity_pendaftar.xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context="com.martin.app.donorkuyadmin.PendaftarActivity"
android:background="@drawable/background4">
<android.support.design.widget.AppBarLayout
android:layout_width="match_parent"
android:layout_height="wrap_content">
<include
android:layout_height="wrap_content"
android:layout_width="match_parent"
layout="@layout/toolbar_pendaftar">
</include>
<android.support.design.widget.TabLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/tablayout"
app:tabMode="fixed"
app:tabGravity="fill">
</android.support.design.widget.TabLayout>
</android.support.design.widget.AppBarLayout>
<android.support.v4.view.ViewPager
android:layout_width="match_parent"
android:layout_height="match_parent"
android:id="@+id/viewpager">
</android.support.v4.view.ViewPager>
</RelativeLayout>
This is my screenshot:
A:
Give an id to your AppBarLayout (say, id=action_bar) and then simply add layout_below="@id/action_bar" to your ViewPager:
<android.support.v4.view.ViewPager
android:layout_below="@id/action_bar"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:id="@+id/viewpager">
</android.support.v4.view.ViewPager>
|
[
"gamedev.stackexchange",
"0000137492.txt"
] | Q:
Loading player position from file but player object not at right position sometimes
In my current project I am developing a game saving system and I have encountered a problem where in some cases when I save the game and re-load it the player will not be loaded in at the correct position.
I've printed the position that is loaded in from file to the console to see if it's being changed and it's not.
Is there anything I'm not seeing?
For reference here's the two methods I use when loading the game:
public void LoadData() {
if (File.Exists(filename)) {
byte[] soupBackIn = File.ReadAllBytes(filename);
string jsonFromFile = encryption.Decrypt(soupBackIn, JSON_ENCRYPTION_KEY);
copy = JsonUtility.FromJson<SaveData>(jsonFromFile);
print(copy.playerPosition);
DataToLoad();
}
}
private void DataToLoad() {
player.transform.position = copy.playerPosition;
player.transform.rotation = copy.playerRot;
player.playerHealth = copy.playerHealth;
player.agility = copy.agility;
player.attack = copy.attack;
player.defense = copy.defense;
player.strength = copy.strength;
for (int i = 0; i < copy.inventory.Count; i++) {
inv.AddItem(copy.inventory[i].id);
}
}
A:
I had a NavMeshAgent attached to my player from a previous prototype of a movement type for the game and it was interfering with where the player was positioned.
|
[
"math.stackexchange",
"0000687301.txt"
] | Q:
prove $v_i=(1^i,2^i,\dots,n^i)$, $i=0,\dots,n-1$ a basis for real $n$-space
The problem goes: for $i \in \{0,\dots,n-1\}$, $v_{i}\in \mathbb{R}^{n}$ is defined by $v_{i} = (1^{i},2^{i},...,n^{i})$. Prove that the list $(v_{0},v_{1},...,v_{n-1})$ is a basis for $\mathbb{R}^{n}$.
Since the list has a length equal to the dimension of $\mathbb{R}^{n}$, my approach is to prove that the list is linearly independent, i.e. the linear combination $a_{0}v_{0}+a_{1}v_{1}+...+a_{n-1}v_{n-1}$ of the list is zero iff $a_{0}=a_{1}=\dots=a_{n-1}=0$. The backward direction is obvious but I am stuck on the forward direction. To prove it I could say that since $v_{0},v_{1},...,v_{n-1}\neq 0$ as defined, the proof is complete. But I doubt the validity of this method since it seems way too easy.
Can anyone give me some advice on this? Thanks a lot.
A:
So let's call $V$ the matrix constructed with the $v_i$ as columns.
Suppose there exists $X = (x_0,x_1,...,x_{n-1})$ in $\mathbb{R}^n$ such that $VX = 0$.
Then : \begin{cases}
x_0 + 1 x_1 + 1^2 x_2 + &\dots + 1^{n-1} x_{n-1}=0\\
x_0 + 2 x_1 + 2^2 x_2 + &\dots + 2^{n-1} x_{n-1}=0\\
& \dots \\
x_0 + n x_1 + n^2 x_2 + &\dots + n^{n-1} x_{n-1}=0
\end{cases}
Finally, let's consider the polynomial $P(Y)=\sum_{i=0}^{n-1} x_i Y^i$.
We can see that $1,2,...,n-1,n$ are roots of $P$, so $P$ has at least $n$ distinct roots. Since $\deg P \le n-1$, this forces $P=0$, so $X=0$, which gives you linear independence.
(Props to Wikipedia France for the proof).
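As a numeric aside (my addition, not part of the quoted proof): the matrix $V$ above is a Vandermonde matrix with nodes $1,\dots,n$, so its determinant is $\prod_{1\le i<j\le n}(j-i) \neq 0$. A quick exact check for $n=4$ in Python:

```python
from fractions import Fraction

def det(m):
    """Determinant via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n = len(m)
    d = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return d

n = 4
# Row j holds v_0..v_{n-1} evaluated at j+1, i.e. (j+1)^0, ..., (j+1)^{n-1}.
V = [[Fraction(j + 1) ** i for i in range(n)] for j in range(n)]
expected = 1
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        expected *= (j - i)
print(det(V), expected)  # → 12 12
```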
|
[
"stackoverflow",
"0003128977.txt"
] | Q:
Static variables in IIS-hosted web applications
If I declare a static field in a type instantiated within an ASP.NET application, hosted within IIS, is the same variable (i.e. same memory location) used by all of the worker threads used by IIS, opening up concurrency issues?
A:
Yes. Static variables are shared across an entire AppDomain, which means all worker threads that live in that AppDomain share the same "instance" of that variable.
Static variables are generally a poor choice for highly concurrent applications, like web apps. Depending on your specific scenario, consider session variables.
|
[
"math.stackexchange",
"0002376276.txt"
] | Q:
Limit of $t^{p - 1}$ as $t$ approaches infinity
If $1 > p - 1 > 0$, such as $0.5$, then $t^{p - 1}$ wouldn’t approach infinity as $t$ approaches infinity. Isn't that right? Doesn't this fact make the solution slightly off?
A:
$1 > p-1 > 0$ is the same as $2 > p > 1$ for which this integral $\int t^{-p}dt$ converges. In fact it converges for all $p > 1$ as the solution states.
In response to your comment: $t^n \to \infty$ as $t\to\infty$ if $n>0$, which includes things such as $n=\frac 1 2 \text{ or } 0.1$ etc.
|
[
"math.stackexchange",
"0002907725.txt"
] | Q:
No. of solutions of $f(x)=f'(x)$?
Let $f:[0,1] \to \Bbb R$ be a fixed continuous function such that $f$ is differentiable on $(0,1)$ and $f(0)=f(1)$. Then the equation $f(x)=f'(x)$ admits
1. No solution $x \in (0,1)$
2. More than one solution $x \in (0,1)$
3. Exactly one solution $x \in (0,1)$
4. At least one solution $x \in (0,1)$
As I have tried: taking $f(x)=0$ on $[0,1]$ ruled out options 1 and 3, and by Rolle's Theorem there exists $c\in (0,1)$ such that $f'(c)=0$. Then I thought to construct the function $g(x)=f(x)-f'(x)$ to check its zeros, but I'm stuck because $f'(x)$ needs to be continuous for that.
Can anyone give some hint to proceed further?
A:
I don't think you can guarantee any of the choices. Below are three possibilities for $f(x)$ which have 1. $0$ solutions, 2. $1$ solution, 3. $\infty$ solutions:

1. $f(x)=4$
2. $f(x)=(x-\frac12)^2$
3. $f(x)=0$
|
[
"french.stackexchange",
"0000020188.txt"
] | Q:
How can I use « avance » to mean « tôt »?
My teacher told me that the word « avance » can be used (e.g. « je suis arrivé en avance ») to mean « tôt ». How can I do that? And is this usage common?
If possible, please answer in English.
A:
Yes, "en avance" is a very frequent idiom of the French language. You can use "en avance" with the verbs 'être', 'venir', 'aller', 'arriver', and so on.
"Je suis en avance", "Il est arrivé en avance", "Elle est allée en avance à son rendez-vous".
"En avance" means 'earlier than a certain time reference' which could be explicit or implicit. It is the opposite of "en retard". It could mean something like 'trop tôt' or 'plus tôt que prévu'.
Explicit: "Je suis arrivé en avance à la réunion" = I came earlier than the scheduled time for the meeting.
Implicit : "Je préfère venir en avance qu'en retard" = "Je préfère venir trop tôt que trop tard" = I prefer to come too early than too late.
You cannot use 'tôt' with the verb 'être', but you can do it with 'venir', 'aller', 'arriver', 'partir', and so on. So these are equivalent to the sentences above:
"Il est arrivé plus tôt que prévu", "Elle est allée trop tôt à son rendez-vous"
|
[
"stackoverflow",
"0059991670.txt"
] | Q:
Using user defined math functions when compiled with a macro, or simply use the function from a standard math library in c++
I have my own implementation of an f_sin(x) function (analogous to sin(x) in math.h) that I want to use when compiling with a macro named MYMATH. If MYMATH is not defined, I want to use the sin(x) function from math.h instead.
Any leads on how to go about this?
Note I cannot change anything in the function definitions of f_sin(x) or sin(x).
A:
You can do it like this:
double sin_wrapper(double x) {
#ifdef MYMATH
return f_sin(x);
#else
return std::sin(x);
#endif
}
and then replace all calls to sin with calls to this wrapper.
A:
You can try using a macro for each function and then just define it depending on your macro MYMATH. Also, if you prefer to avoid this kind of macro, you can use a generic lambda as a wrapper.
MyMath.hpp
1.- With Macros for each function
#ifdef MYMATH
#define imp_sin(x) f_sin(x)
#else
#include <cmath>
#define imp_sin(x) std::sin(x)
#endif
2. With generic lambda (C++ 14)
#define glambda(x) [](auto y){ return x(y); }
#ifdef MYMATH
auto imp_sin = glambda(f_sin);
#else
#include <cmath>
auto imp_sin = glambda(std::sin);
#endif
#undef glambda //Or not if you want to keep this helper
Usage main.cpp
#include "MyMath.hpp"
int main(int, char**) {
imp_sin(3.4f);
return 0;
}
|
[
"stackoverflow",
"0061564194.txt"
] | Q:
Submit button of the form in flask is not working
The submit button of the form is not working. After submitting the form, it should redirect to the next page, but nothing is happening. On submit, it was supposed to redirect to localhost:5000/dashboard-data and then print the data on the web page. I have provided as much detail as I could.
This is dashboard.py
import os
import random
import pandas as pd
from flask import Flask, render_template , request
import sqlite3
import matplotlib.pyplot as plt
import matplotlib
PEOPLE_FOLDER = os.path.join('static')
app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = PEOPLE_FOLDER
data = pd.read_excel('C:\\Users\\Desktop\\Projects\\api\\Tweet_qlik_first.xlsx')
from sqlalchemy import create_engine
engine = create_engine('sqlite://', echo=False)
@app.route('/', methods=['GET', 'POST'])
def homepage():
if request.method=='GET':
matplotlib.use('Agg')
data.to_sql('users', con=engine)
topic_list=engine.execute("SELECT distinct Topic FROM users").fetchall()
count=engine.execute('''Select Topic,count(*) from users group by Topic''',).fetchall()
print(count)
x = []
y = []
for tr in count:
x.append(tr[0])
y.append(tr[1])
plt.bar(x,y)
plt.gcf().subplots_adjust(bottom=0.15)
plt.xlabel('Topics')
plt.ylabel('Tweet Count')
ax = plt.gca()
plt.setp(ax.get_xticklabels(), rotation=30, horizontalalignment='right')
plt.tight_layout()
x=random.randint(0,9999999)
img_name='plot'+str(x)+'.png'
plt.savefig('static/'+img_name)
full_filename = os.path.join(app.config['UPLOAD_FOLDER'], img_name)
tl=[]
for tr in topic_list:
tl.append(tr[0])
return render_template("main.html",topics=tl,img=full_filename)
@app.route('/dashboard-data',methods=['GET','POST'])
def result():
if request.method=='POST':
result=request.form["topic_list"]
topic_fcount=engine.execute('''Select Topic,"Follower Count" from users where Topic=?''',(str(result),)).fetchall()
return render_template("dashboard.html")
if __name__ == "__main__":
app.run(debug=True)
This is main.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<form action = "/dashboard-data" method = 'POST'>
Topic
<select name="topic_list">
{% for each in topics %}
<option value="{{each}}" selected="{{each}}">{{each}}</option>
{% endfor %}
</select>
<input type="button" value="Submit"/>
</form>
</body>
<img src="{{ img }}" alt="User Image" >
</html>
This is dashboard.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
{% for row in topic_fcount %}
{{row}}
{%endfor%}
</body>
</html>
A:
Submit button of the form is not working. After submitting the form, it should redirect to the next page but nothing is happening.
Try changing the button to:
<input type="submit" value="Submit" />
|
[
"askubuntu",
"0000863124.txt"
] | Q:
systemd-journal + systemd-resolve + dnsmasq high cpu usage
On Ubuntu 16.10, for some minutes after connecting to wifi, systemd-journal, systemd-resolve and dnsmasq tend to use almost 150% of CPU.
Is this normal?
A:
Steps of the solution (suggested by another user):
Add the line DNSMASQ_EXCEPT=lo to /etc/default/dnsmasq
sudo nano /etc/default/dnsmasq
Restart the service via
sudo service systemd-resolved restart
Say thanks if I helped. It went back to normal and does NOT screw around with other apps, as the previous method did.
Cheers, Mark
|
[
"stackoverflow",
"0004325015.txt"
] | Q:
Static resource shared in merged dictionaries
I'm currently working on having dictionaries of styles and templates that I can dynamically apply to my application. Before this "new wanted" dynamical behavior, I had several resource dictionaries, one for each styled control, that I merged in the App.xaml:
<Application.Resources>
<ResourceDictionary>
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary Source="ColorsDictionary.xaml"/>
<ResourceDictionary Source="ControlsTemplatesDictionary.xaml"/>
</ResourceDictionary.MergedDictionaries>
</ResourceDictionary>
</Application.Resources>
Now, I'd like my application to be styled, so I decided to merge all my previous resources into a new one called "MyFirstTemplates" and to add only this dictionary to the App.xaml.
New dictionary "MyFirstTemplates.xaml":
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary Source="ColorsDictionary.xaml"/>
<ResourceDictionary Source="ControlsTemplatesDictionary.xaml"/>
</ResourceDictionary.MergedDictionaries>
</ResourceDictionary>
New App.xaml:
<Application.Resources>
<ResourceDictionary>
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary Source="MyFirstTemplates.xaml"/>
</ResourceDictionary.MergedDictionaries>
<Style TargetType="{x:Type Window}"/>
</ResourceDictionary>
</Application.Resources>
Note: The default style for the Window is to correct a bug of WPF 4, see Adding a Merged Dictionary to a Merged Dictionary
Now that I have made this change, I cannot use a color resource from "ColorsDictionary.xaml" as a StaticResource in "ControlsTemplateDictionary.xaml" anymore. If I change back to merging these files in the app.xaml, everything works. To make it work, I have to change these StaticResource for DynamicResource. Do you have any idea why this doesn't work anymore?
Thank you :-)
A:
By moving the dictionaries out of App.xaml the resources from each dictionary aren't in the other's resource tree during loading of MyFirstTemplates.xaml. Your original setup first loaded ColorsDictionary which was then available through App resources to ControlsTemplatesDictionary while it loaded. In your new setup, in order for the color resource to be available in App resources it needs to be loaded through MyFirstTemplates, which in turn requires loading of both dictionaries, which in turn requires access to the color resource... so it's sort of an infinite loop of references that can't be resolved statically. DynamicResource can wait until everything is loaded and then access the color without issue.
To fix it, either use DynamicResource or merge ColorsDictionary directly into ControlsTemplatesDictionary.
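For the second fix, the top of ControlsTemplatesDictionary.xaml would look something like this (file names as in the question; the comment marks where the templates go):

```xml
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <ResourceDictionary.MergedDictionaries>
        <ResourceDictionary Source="ColorsDictionary.xaml"/>
    </ResourceDictionary.MergedDictionaries>
    <!-- templates below can now resolve colors from ColorsDictionary via StaticResource -->
</ResourceDictionary>
```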
|
[
"apple.stackexchange",
"0000282385.txt"
] | Q:
8 year support on MacBook Pros
I currently use a "Mid 2010" MacBook Pro and was able to install Sierra and XCode 8.3 without any issues.
I would like to use this MBP for some two or three more years, however a person told me that Apple has an "8 year support window" and that 2017 is the last year in which I'll be able to update to the latest OS and install the latest XCode, etc.
I've Googled but couldn't find anything stating a similar policy. Does this have any truth to it? Is it possible to know how much longer I'll be able to install MacOS and XCode updates?
A:
There is no official timeframe that Apple hardware will support the latest operating system or versions of particular software. It will vary depending on the type of enhancements that occur in hardware and what impact this has in terms of what the latest software needs in order to be able to perform certain tasks.
The person who told you that Apple has an "8 year support window" may have been getting confused with Apple's policy on Vintage and obsolete products, but this relates only to hardware/parts support and has no implication in terms of the operating system and other software.
In summary:
Vintage products are those that have not been manufactured for more than 5 and less than 7 years
Obsolete products are those that were discontinued more than 7 years ago.
However, even the above will have some exceptions depending on local laws.
Using macOS Sierra as a guide, it is officially supported on:
iMac: Late 2009 or newer
MacBook and MacBook Retina: Late 2009 or newer
MacBook Pro: Mid 2010 or newer
MacBook Air: Late 2010 or newer
Mac mini: Mid 2010 or newer
Mac Pro: Mid 2010 or newer
Source: Apple
As you can see, in some cases support goes back to 2009 models, while in others it goes back to 2010 models. Therefore, there is no official timeframe that Apple hardware will support the latest operating system or versions of particular software.
|
[
"stackoverflow",
"0052972258.txt"
] | Q:
Guarantee limiting AWS Lambda functions to specified budget
I'm new to AWS Lambda (and AWS in general). I need to write some development code for AWS.
Since Lambda functions are billed by execution time and number of requests, I'd like to guarantee that no out-of-control function or route spam would skyrocket costs out of my budget and put me in debt. (It is development code, so I expect there to be mistakes and I don't want them to be expensive ones.)
I know AWS has budget alarms which send you emails, but this is not good enough for me, since it might take days/weeks until I notice a message somewhere.
Is there a way to tell AWS to shut down a service if it is exceeding a budget? I'm looking for something similar to what DigitalOcean does, where you can set a fixed budget.
A:
Create a Lambda function whose purpose is to delete the deployed Lambda source code (its S3 bucket).
Then:
Create a billing alarm
Define your metric threshold, e.g. 5 USD
Create an SNS topic
Subscribe to it, with the Lambda function as the endpoint
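A sketch of the billing-alarm step with the AWS CLI (the SNS topic ARN, account id, and threshold are placeholders; note that the AWS/Billing metrics live only in us-east-1 and require billing alerts to be enabled in the account):

```shell
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name lambda-budget-guard \
  --namespace "AWS/Billing" \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 5 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:budget-alarm-topic
```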
|
[
"stackoverflow",
"0021891877.txt"
] | Q:
Returning a login error message using Shiro
Filters and redirects are not my strong point.
I have Shiro set up and working in Spring, except that I'd like to return an error message on an invalid login while staying on the same page. So I'm causing an invalid login. I've got ShiroFilterFactoryBean set with a property that should send it to /ldapLoginErr, which I then map to login.jsp and process in an error function in my Controller. But I'm getting a 404 and the URL is pointing to my base URL instead of /ldapLoginErr or /ldapLogin.
<bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="prefix" value="/WEB-INF/jsp/" />
<property name="suffix" value=".jsp" />
</bean>
<mvc:view-controller path="/ldapLogin" view-name="ldapLogin" />
<mvc:view-controller path="/ldapLoginSuccess" view-name="ldapLogin" />
<mvc:view-controller path="/ldapLoginErr" view-name="ldapLogin" />
<bean id="shiroFilter" class="org.apache.shiro.spring.web.ShiroFilterFactoryBean">
<property name="securityManager" ref="securityManager"/>
<property name="loginUrl" value="/ldapLogin"/>
<property name="unauthorizedUrl" value="/ldapLogin"/>
<property name="successUrl" value="/ldapLogin"/>
<property name="filterChainDefinitions">
<value>
[urls]
/** = ssl[8443],authc, customAuthFilter
[main]
/logout = logout
</value>
</property>
</bean>
@RequestMapping(value = "/ldapLogin", method = RequestMethod.GET)
public ModelAndView login(Model model, HttpSession session){
logger.debug("start login controller function");
ModelAndView mav = new ModelAndView();
return mav;
}
@RequestMapping(value = "/ldapLoginErr", method = RequestMethod.GET)
public ModelAndView loginErr(Model model, HttpSession session){
ModelAndView mav = new ModelAndView();
mav.addObject("errorMessage", msgSrc.getMessage("auth.notauth", null, null, null));
return mav;
}
@RequestMapping(value = "/ldapLoginSuccess", method = RequestMethod.GET)
public ModelAndView loginSuccess(Model model, HttpSession session){
}
The following didn't work either:
<bean id="shiroFilter" class="org.apache.shiro.spring.web.ShiroFilterFactoryBean">
<property name="securityManager" ref="securityManager"/>
<property name="loginUrl" value="/ldapLogin"/>
<property name="unauthorizedUrl" value="/ldapLoginErr"/>
<property name="successUrl" value="/ldapLoginSuccess"/>
Thanks for any help
A:
When authentication fails, Shiro redirects to the login page and sets a request attribute named shiroLoginFailure. So in my login.jsp page I added the following:
<c:if test="${shiroLoginFailure != null}">
Username or password incorrect
</c:if>
If you are not using JSTL and EL, you can use JSP scriptlets:
<%
if (request.getAttribute("shiroLoginFailure")!=null) {
%>
Username or password incorrect
<%
}
%>
A:
That seems convoluted.
Create a controller that has a GET and a POST endpoint mapped to /login
GET returns the view for the login page.
POST handles the call to shiro login.
Authc filter
<bean id="authc" class="org.apache.shiro.web.filter.authc.PassThruAuthenticationFilter">
<property name="loginUrl" value="/login"/>
</bean>
Filter chain def
<property name="filterChainDefinitions">
<value>
/login = authc
/logout = logout
/secure/** = authc
</value>
</property>
Controller
@RequestMapping(method = RequestMethod.GET)
public ModelAndView view() {
return new ModelAndView(view);
}
@RequestMapping(method = RequestMethod.POST)
public ModelAndView login(HttpServletRequest req, HttpServletResponse res, LoginForm loginForm) {
try {
Subject currentUser = SecurityUtils.getSubject();
currentUser.login(new UsernamePasswordToken(loginForm.getUsername(), loginForm.getPassword()));
WebUtils.redirectToSavedRequest(req, res, fallBackUlr);
return null; //redirect
} catch(AuthenticationException e) {
ModelAndView mav = new ModelAndView(view);
mav.addObject("errorMessage", "error");
return mav;
}
}
|
[
"stackoverflow",
"0025874763.txt"
] | Q:
MVC Controller with optional parameter in F#
I create controller with optional parameter accordingly:
type ProductController(repository : IProductRepository) =
inherit Controller()
member this.List (?page1 : int) =
let page = defaultArg page1 1
When I have started application it gives me error:
"System.MissingMethodException: No parameterless constructor defined for this object."
I know this error from dependency injection, here is my Ninject settings:
static let RegisterServices(kernel: IKernel) =
System.Web.Http.GlobalConfiguration.Configuration.DependencyResolver <- new NinjectResolver(kernel)
let instance = Mock<IProductRepository>()
.Setup(fun m -> <@ m.Products @>)
.Returns([
new Product(1, "Football", "", 25M, "");
new Product(2, "Surf board", "", 179M, "");
new Product(3, "Running shoes", "", 95M, "")
]).Create()
kernel.Bind<IProductRepository>().ToConstant(instance) |> ignore
do()
The issue is that when I remove my optional parameter from the controller, all works fine. When I change the parameter to a regular one, it gives me the following error:
The parameters dictionary contains a null entry for parameter 'page' of non-nullable type 'System.Int32' for method 'System.Web.Mvc.ViewResult List(Int32)' in 'FSharpStore.WebUI.ProductController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter.
Parameter name: parameters
This is my routes settings:
static member RegisterRoutes(routes:RouteCollection) =
routes.IgnoreRoute("{resource}.axd/{*pathInfo}")
routes.MapRoute(
"Default", // Route name
"{controller}/{action}/{id}", // URL with parameters
{ controller = "Product"; action = "List"; id = UrlParameter.Optional } // Parameter defaults
) |> ignore
Has anyone done an optional parameter for a controller? I'm working on a pilot project for my colleague to promote F# to our stack. Thanks
A:
Another F# - C# interop pain :P
The F# optional parameter is a valid optional parameter for F# callers; however, in C# these parameters will be FSharpOption<T> objects.
In your case you must use a Nullable<T>. So, the code looks like:
open System.Web.Mvc
open System
[<HandleError>]
type HomeController() =
inherit Controller()
member this.Hello(code:Nullable<int>) =
this.View() :> ActionResult
|
[
"politics.stackexchange",
"0000031325.txt"
] | Q:
Can I vote for candidates from a different district in the New Jersey Democratic primaries?
New Jersey is having Democratic primary voting tomorrow. Can I only vote for the candidates that live in my district?
For example, if I live in District 1, can I only vote for candidates in District 1? Or can I also vote for candidates in District 2?
A:
For example, if I live in District 1, can I only vote for candidates in District 1?
Yes, this. I've never voted in New Jersey, but in most places, your entire precinct will be in the same district and only able to vote in the races that apply to it. You won't even see district 2 candidates on the ballot.
Only one district per voter is a matter of federal law. You can only belong in one district and only vote in that district. There used to be at-large districts such that when there were two, your vote could count in both. However, this was used to prevent black representation, as a state that was majority white could allow the white majority to elect all the representatives. So the law was changed such that each person can only vote in one district.
|
[
"stackoverflow",
"0021225061.txt"
] | Q:
Images clipped using SVG paths are reversed in Chrome
I am having issues with an SVG clipping mask that's applied to an image. This works correctly in Firefox, but in Chrome and IE the clipping mask works in reverse (not had a chance to try other browsers yet).
Here's what I mean-
Firefox
Chrome/IE
<svg height="0" width="0" >
<defs>
<clipPath id="clipPath" stroke="white" stroke-width="10">
<path d="M252.294,0.26l-203.586,0c0,0-47.43,1.586-48.207,38.876c0.777,37.29,48.207,38.877,48.207,38.877h203.586
c0,0,47.43-1.587,48.207-38.877C299.724,1.847,252.294,0.26,252.294,0.26z"/>
</clipPath>
</defs>
<div id='board_img_1' class='board_imgs'>
<img src="./images/board1.png" style=" clip-path: url(#clipPath);
width: 100%;
height: 100%;"></div>
<div id='board_img_2' class='board_imgs'>
<img src="./images/board2.png" style=" clip-path: url(#clipPath);
width: 100%;
height: 100%;"></div>
</svg>
Here's my HTML. I'm not sure where to begin even trying to fix this and it seems like a fairly specific issue.
A:
As Michael Mullany suggested, try changing img to image and changing your div tags.
http://www.w3schools.com/svg/svg_reference.asp
Here is a page that might help with regards to what you can/can not use.
There is also some examples of how to use SVG here:
http://www.w3schools.com/svg/svg_examples.asp
Lastly, check out this link for browser support for SVG and its various uses:
http://caniuse.com/#cats=SVG
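A sketch of that suggestion applied to the markup in the question: the <div>/<img> elements become SVG <image> elements inside the <svg> (the width/height values here are guesses based on the path data):

```xml
<svg width="310" height="90" xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink">
  <defs>
    <clipPath id="clipPath">
      <!-- same path data as in the question -->
      <path d="M252.294,0.26l-203.586,0c0,0-47.43,1.586-48.207,38.876c0.777,37.29,48.207,38.877,48.207,38.877h203.586c0,0,47.43-1.587,48.207-38.877C299.724,1.847,252.294,0.26,252.294,0.26z"/>
    </clipPath>
  </defs>
  <image xlink:href="./images/board1.png" width="310" height="90"
         clip-path="url(#clipPath)"/>
</svg>
```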
|
[
"academia.stackexchange",
"0000009712.txt"
] | Q:
Master of Engineering vs Master of Science?
I am a Final Year Mechanical Engineering student.
I want to do my Masters in the Design Engineering field.
Currently I can see 2 options:
Master of Science - offered in many Universities ex: GeorgiaTech.
It deals more with research. It requires a research thesis most of the times.
It gets completed in about 2 years.
Master of Engineering - offered in few Universities ex: Cornell University.
It deals more with developing your skills required for doing a job in that field.
It does not require a research thesis. It gets completed in a year.
So, my question is:
1. What exactly do we learn/develop skills/improve/do, etc in each of these programs.
I am unable to get a clear picture as to what exactly will be the change in me brought by each of these programs?
What distinguishes a M.S. Graduate from a M.Eng. Graduate & vice versa?
And which one of these Graduates get jobs in Companies easily in the related field?
A:
The main practical difference between the two degrees, as you point out, is the requirement of a research thesis for the MS degree. Generally, if you have aspirations of eventually getting a PhD, you should strongly consider the MS, as research experience or potential is a large factor in being admitted to a PhD program.
To answer your specific question, you should be a better researcher after completing an MS, and you will be better prepared for further graduate work. With an ME degree (considered a "terminal" degree), you'll simply have a Master's degree and (potentially) may have spent more time on your coursework. Whether or not this prepares you better for a position in industry is debatable -- as you say, many ME programs are one year, which may actually include fewer classes than an equivalent MS (although I've generally seen ME programs that have one or two more classes as a requirement than the equivalent MS degree).
As to which degree leads to more jobs in industry, I'd say it's probably about the same. Getting an MS will certainly not limit your competitiveness for industry jobs, while (as I already mentioned), an ME may limit your competitiveness for PhD programs.
|
[
"stackoverflow",
"0020314478.txt"
] | Q:
Is it possible to declare a method with block as default value?
I want to write a method which takes a block and if no block given it should use a default block. So I want to have something like this:
def say_hello(name, &block = ->(name) { puts "Hi, #{name}" })
# do something
end
But when I'm trying to do so I'm getting the syntax error.
I know I can deal with my problem using block_given?. But I am interested in first approach.
Am I missing something or this is just not possible?
A:
Some answers suggest using block_given?, but since there is no possibility that a block would be nil or false when it is given, you can simply use ||=.
def say_hello(name, &block)
block ||= ->(name){puts "Hi, #{name}"}
# do something
end
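A runnable sketch of that pattern (returning the greeting instead of printing it, so the result is easy to check; the method name is from the question):

```ruby
def say_hello(name, &block)
  # Fall back to a default block when the caller didn't pass one.
  block ||= ->(n) { "Hi, #{n}" }
  block.call(name)
end

puts say_hello("Alice")                       # default block: "Hi, Alice"
msg = say_hello("Bob") { |n| "Hello, #{n}!" } # caller-supplied block
puts msg                                      # "Hello, Bob!"
```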
|
[
"stackoverflow",
"0034064328.txt"
] | Q:
C++ How to avoid race conditions when transfering money at bank accounts
I'm kind of stuck here.....
I want to transfer money from one bank account to another. There are a bunch of users and each user is a thread doing some transactions on bank accounts.
I tried different solutions, but it seems that it always results in a race condition when doing transactions. The code I have is this:
#include <mutex>
class Account
{
private:
std::string name_;
unsigned int balance_;
std::mutex classMutex_;
public:
Account(std::string name, unsigned int balance);
virtual ~Account();
unsigned int getBalance_sync();
void makePayment_sync(unsigned int payment);
void takeMoney_sync(unsigned int payout);
void makeTransaction_sync(unsigned int transaction, Account& toAccount);
};
unsigned int Account::getBalance_sync()
{
std::lock_guard<std::mutex> guard(classMutex_);
return balance_;
}
void Account::makePayment_sync(unsigned int payment)
{
std::lock_guard<std::mutex> guard(classMutex_);
balance_ += payment;
}
void Account::takeMoney_sync(unsigned int payout)
{
std::lock_guard<std::mutex> guard(classMutex_);
balance_ -= payout;
}
void Account::makeTransaction_sync(unsigned int transaction, Account& toAccount)
{
std::lock_guard<std::mutex> lock(classMutex_);
this->balance_ -= transaction;
toAccount.balance_ += transaction;
}
Note: I called the methods foo_sync because there should also be a case where the result shows race conditions.
But yeah, I'm kind of stuck here... I also tried this method, where I created a new mutex, mutex_:
class Account
{
private:
std::string name_;
unsigned int balance_;
std::mutex classMutex_, mutex_;
...
void Account::makeTransaction_sync(unsigned int transaction, Account& toAccount)
{
std::unique_lock<std::mutex> lock1(this->mutex_, std::defer_lock);
std::unique_lock<std::mutex> lock2(toAccount.mutex_, std::defer_lock);
// lock both unique_locks without deadlock
std::lock(lock1, lock2);
this->balance_ -= transaction;
toAccount.balance_ += transaction;
}
but I got some weird errors at runtime! Any suggestions/hints/ideas to solve this problem? Thanks in advance :)
A:
OK, here's what I think is a reasonable starting point for your class.
It's not the only way to do it, but there are some principles used that I adopt in my projects. See comments inline for explanations.
This is a complete example. for clang/gcc compile and run with:
c++ -o payment -O2 -std=c++11 -pthread payment.cpp && ./payment
If you require further clarification, please feel free to ask:
#include <iostream>
#include <mutex>
#include <cassert>
#include <stdexcept>
#include <thread>
#include <vector>
#include <random>
class Account
{
using mutex_type = std::mutex;
using lock_type = std::unique_lock<mutex_type>;
std::string name_;
int balance_;
// mutable because we'll want to be able to lock a const Account in order to get a balance
mutable mutex_type classMutex_;
public:
Account(std::string name, int balance)
: name_(std::move(name))
, balance_(balance)
{}
// public interface takes a lock and then defers to internal interface
void makePayment(int payment) {
auto lock = lock_type(classMutex_);
modify(lock, payment);
}
void takeMoney(int payout) {
makePayment(-payout);
}
int balance() const {
auto my_lock = lock_type(classMutex_);
return balance_;
}
void transfer_to(Account& destination, int amount)
{
// try/catch in case one part of the transaction threw an exception.
// we don't want to lose money in such a case
try {
std::lock(classMutex_, destination.classMutex_);
auto my_lock = lock_type(classMutex_, std::adopt_lock);
auto his_lock = lock_type(destination.classMutex_, std::adopt_lock);
modify(my_lock, -amount);
try {
destination.modify(his_lock, amount);
} catch(...) {
modify(my_lock, amount);
std::throw_with_nested(std::runtime_error("failed to transfer into other account"));
}
} catch(...) {
std::throw_with_nested(std::runtime_error("failed to transfer from my account"));
}
}
// provide a universal write
template<class StreamType>
StreamType& write(StreamType& os) const {
auto my_lock = lock_type(classMutex_);
return os << name_ << " = " << balance_;
}
private:
void modify(const lock_type& lock, int amount)
{
// for internal interfaces where the mutex is expected to be locked,
// i like to pass a reference to the lock.
// then I can assert that all preconditions are met
// precondition 1 : the lock is active
assert(lock.owns_lock());
// precondition 2 : the lock is actually locking our mutex
assert(lock.mutex() == &classMutex_);
balance_ += amount;
}
};
// public overload or ostreams, loggers etc
template<class StreamType>
StreamType& operator<<(StreamType& os, const Account& a) {
return a.write(os);
}
void blip()
{
using namespace std;
static mutex m;
lock_guard<mutex> l(m);
cout << '.';
cout.flush();
}
// a test function to peturb the accounts
void thrash(Account& a, Account& b)
{
auto gen = std::default_random_engine(std::random_device()());
auto amount_dist = std::uniform_int_distribution<int>(1, 20);
auto dist = std::uniform_int_distribution<int>(0, 1);
for (int i = 0 ; i < 10000 ; ++i)
{
if ((i % 1000) == 0)
blip();
auto which = dist(gen);
auto amount = amount_dist(gen);
// make sure we transfer in both directions in order to
// cause std::lock() to resolve deadlocks
if (which == 0)
{
b.takeMoney(1);
a.transfer_to(b, amount);
a.makePayment(1);
}
else {
a.takeMoney(1);
b.transfer_to(a, amount);
b.makePayment(1);
}
}
}
auto main() -> int
{
using namespace std;
Account a("account 1", 100);
Account b("account 2", 0);
cout << "a : " << a << endl;
cout << "b : " << b << endl;
// thrash 50 threads to give it a thorough test
vector<thread> threads;
for(int i = 0 ; i < 50 ; ++i) {
threads.emplace_back(std::bind(thrash, ref(a), ref(b)));
}
for (auto& t : threads) {
if (t.joinable())
t.join();
}
cout << endl;
cout << "a : " << a << endl;
cout << "b : " << b << endl;
// check that no money was lost
assert(a.balance() + b.balance() == 100);
return 0;
}
example output:
a : account 1 = 100
b : account 2 = 0
....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
a : account 1 = 7338
b : account 2 = -7238
|
[
"stackoverflow",
"0009436964.txt"
] | Q:
Close popup from child button's press?
I have a Popup that contains a "close" button. The popup is opened by a toggle button (its IsOpen property is bound to a ToggleButton as provided by this answer). How can I close the popup when the button is pressed? This is my XAML:
<Canvas x:Name="LayoutRoot">
<ToggleButton x:Name="ToggleButton"
Style="{DynamicResource ToggleButtonStyle}" Height="51" Canvas.Left="2.999" Width="50.333" IsHitTestVisible="{Binding ElementName=Popup, Path=IsOpen, Mode=OneWay, Converter={StaticResource BoolInverter}}"/>
<Popup x:Name="Popup" IsOpen="{Binding IsChecked, ElementName=ToggleButton}" StaysOpen="False" AllowsTransparency="True">
<Canvas Height="550" Width="550">
<Grid Height="500" Width="500" Canvas.Left="25" Canvas.Top="25" d:LayoutOverrides="Width, Height, Margin">
<Grid.Effect>
<DropShadowEffect BlurRadius="15" ShadowDepth="0"/>
</Grid.Effect>
<Grid.RowDefinitions>
<RowDefinition Height="0.132*"/>
<RowDefinition Height="0.868*"/>
</Grid.RowDefinitions>
<Rectangle x:Name="Background" Fill="#FFF4F4F5" Margin="0" Stroke="Black" RadiusX="6" RadiusY="6" Grid.RowSpan="2"/>
<Border x:Name="TitleBar" BorderThickness="1" Height="70" VerticalAlignment="Top" Margin="0,0.5,0,0" CornerRadius="5">
<DockPanel>
<TextBlock TextWrapping="Wrap" Text="FOOBAR POPUP TITLE" FontSize="24" FontFamily="Arial Narrow" Margin="17,0,0,0" d:LayoutOverrides="Height" VerticalAlignment="Center" FontWeight="Bold"/>
<Button x:Name="CloseButton" Content="Button" VerticalAlignment="Center" DockPanel.Dock="Right" HorizontalAlignment="Right" Margin="0,0,13,0" Style="{DynamicResource CloseButtonStyle}"/>
</DockPanel>
</Border>
<Border BorderThickness="1" Height="413" Grid.Row="1" Background="#FF2F2F2F" Margin="12">
<Rectangle Fill="#FFF4F4F5" RadiusY="6" RadiusX="6" Stroke="Black" Margin="12"/>
</Border>
</Grid>
</Canvas>
</Popup>
</Canvas>
A:
A better approach than code behind is to use an event trigger on the button click event:
<Button>
<Button.Triggers>
<EventTrigger RoutedEvent="Button.Click">
<BeginStoryboard>
<Storyboard>
<BooleanAnimationUsingKeyFrames Storyboard.TargetProperty="IsChecked" Storyboard.TargetName="ToggleButton">
<DiscreteBooleanKeyFrame KeyTime="0:0:0" Value="False" />
</BooleanAnimationUsingKeyFrames>
</Storyboard>
</BeginStoryboard>
</EventTrigger>
</Button.Triggers>
</Button>
Disclaimer: I haven't run this code through VS so it might have a typo or 2
|
[
"stackoverflow",
"0052248086.txt"
] | Q:
PHP echo javascript - Not echoing $
I am echoing a javascript block using PHP like this...
echo "<script language='javascript' type='text/javascript'>
jQuery(document).ready(function($){
var $lg = $('#mydiv');
});
";
But I am getting the following error message...
Notice: Undefined variable: lg
When I inspect the source, the line that defines $lg looks like this...
var = $('#mydiv');
Any ideas why it is happening? Does the $ need escaping?
A:
When using double quotes in PHP, variables are interpolated inside strings, for example:
$name = "Elias";
echo "My name is $name";
This will print My name is Elias.
If you want to use $ inside an string, you must escape it or use single quotes:
$name = "Elias";
echo "I love the variable \$name";
echo 'I love the variable $name';
Both echos will print I love the variable $name
Also, due to the use of double quotes, you are using single quotes for the html inside your string. This is invalid HTML, though the browser parses it correctly. (Actually it's valid, sorry)
The right way to do it is to use single quotes to your string, or escaping the double quotes:
echo "<script language=\"javascript\" type=\"text/javascript\">";
// or
echo '<script language="javascript" type="text/javascript">';
|
[
"gis.stackexchange",
"0000111551.txt"
] | Q:
featureId filtering in wfs openlayers
I want to define a featureId filter for a wfs layer like below:
wfs = new OpenLayers.Layer.Vector("WFS Vectore", {
strategies: [new OpenLayers.Strategy.BBOX(), saveStrategy],
projection: new OpenLayers.Projection("EPSG:4326"),
protocol: new OpenLayers.Protocol.WFS({
version: "1.1.0",
srsName: "EPSG:4326",
url: "http://localhost:8080/geoserver/iran/wms?service=WFS",
featureType: "population",
featureNS: "http://iran.kadaster.org",
geometryName: "the_geom"
}),
filter:
new OpenLayers.Filter.FeatureId({
fids: ['population.913', 'population.912']
//type: ?????
})
});
I don't know what to set the 'type' variable to in the filter options.
A:
The problem was the fids field. When you set fids to ['population.913', 'population.912'], it means a feature whose fid equals both 'population.913' and 'population.912'. If you set the fids field to just 'population.912' or 'population.913', you get the correct response.
|
[
"stackoverflow",
"0040100726.txt"
] | Q:
Storing Image In Database Without JQuery
Good day stackers,
I'm working on a web project in which we recreate a social media site similar to snapchat. I'm using my webcam to take pictures using JS, and I'm writing the picture to a var called img as follows:
var img = canvas.toDataURL("image/png");
We are required to use PDO in PHP to access the database. Alternatively we can use AJAX, but JQuery is strictly forbidden. My question is, how can I store the DataURL inside my database? All the tutorials online use JQuery.
Update:
I followed the steps as suggested below, but when I hit the snap button, it still only takes the picture; no URL or anything appears.
function sendimagetourl(image)
{
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function()
{
if (this.readyState == 4 && this.status == 200)
{
alert( this.resoponseText);
}
}
xhtp.open("GET", "saveimage.php?url="+image, true);
xhttp.send();
}
//Stream Video
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia)
{
navigator.mediaDevices.getUserMedia({video: true}).then(function(stream) {
video.src = window.URL.createObjectURL(stream);
video.play();
});
}
//Snap Photo
var canvas = document.getElementById('canvas');
var context = canvas.getContext('2d');
var video = document.getElementById('video');
document.getElementById("snap").addEventListener("click", function() {context.drawImage(video, 0, 0, 800, 600);var image = canvas.toDataURL("image/png"); sendimagetourl(image);});
So it appears I was a bit of a fool. There were two minor typos, and it seems like the data sent is invisible in the url. Thanks for all the help!
A:
You can use javascript ajax
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 200) {
alert( this.responseText);
}
};
xhttp.open("GET", "saveimage.php?url="+url, true);
xhttp.send();
|
[
"stackoverflow",
"0025537140.txt"
] | Q:
Facebook Unity Plugin v6.0 - crash on Kindle Fire if you login, cancel login, then login again
(This isn't a question, it's a bug report.)
Facebook Unity Plugin v6.0 - crash on Kindle Fire if you login, cancel login, then login again.
I'm on Unity 4.5.3 (latest).
You can reproduce this in the Facebook test scene.
Install to your Kindle Fire
I don't have the Facebook app installed, so it's using the web flows (not the native FB app)
Press FB.Init
Press Login
Cancel the login via the "X" button
Press Login again
Crash
Any workarounds?
A:
Ok, I have a workaround. BTW, this is also a problem on the Galaxy 3; my guess is most if not all devices.
Fix: in the login callback, if you did not log in correctly (i.e. is_logged_in is false), call FB.Init(somemethod) again. Yes, you will get a warning about it not being supposed to be called twice, but it works. Just make sure that if you are logging into Facebook in the callback of your init, you create a new empty callback; otherwise round and round you go.
Steps to repro are basically the same as yours; however, you can also get it with the back button and by canceling permissions.
Login.
Back button when it takes you to facebook OR cancel on permissions.
Take you back to app.
Login again.
Crash.
Also, it appears to only happen with the Facebook app itself, NOT if you don't have FB installed and are using the browser.
|
[
"pt.stackoverflow",
"0000046748.txt"
] | Q:
asp.net serving static files
I have an mvc3 application (asp.net 4.0, dotnet 4.0),
running on iis8 (but it also runs on iis 7 and iis 7.5).
Inside the application I have a folder called /dados,
for example localhost/minhaapp/dados.
My clients save HTML files inside this folder,
and those HTML files are accessed by the system built in mvc3.
But users are accessing the files directly via the browser URL,
for example localhost/minhaapp/dados/relatorio1.html, instead of
accessing them through an internal option of the mvc3 system.
Is there a way to block GET access to the HTML files in the /dados folder?
I still want the mvc3 system to be able to access those files via the
POST method.
This is not the most secure method, and many other possibilities must exist,
but achieving this would already be great.
I tried several approaches (rewrite, modules, urlmapping, mvc controller, etc.),
but none of them captures the browser request for static files.
I know static files are served directly by iis.
But is there any way for asp.net to intercept requests for static files?
If it is not possible via asp.net, is there a configuration that works on
iis 8, 7.5 and 7?
I cannot move them to another folder, nor save the files in another way; that is,
for internal reasons (internal policies, and the fact that it already works this way, is widespread,
and is used by several clients), the way and the location in which the files are saved
CANNOT be changed.
A:
You tell IIS that every request must pass through the managed (ASP.NET) pipeline, because IIS does not apply permissions to static files.
In your Web.config, add the following code:
<configuration>
...
<system.webServer>
<modules runAllManagedModulesForAllRequests="true" />
</system.webServer>
...
</configuration>
|
[
"history.stackexchange",
"0000009508.txt"
] | Q:
What happened to sacrificed animals in ancient Greece?
I was wondering what happened to animals that were sacrificed in ancient Greece. For instance, when Pythagoras created his theorem he made a sacrifice of 100 oxen.
Was the sacrifice considered a major feast, with the animals eaten in the process, or were they left to rot, to be eaten by the "gods"?
If the animals were eaten and the bones were burnt, what happened to the ashes?
A:
Cthonic sacrifices generally resulted in the animals being burnt entire. Totally cremating doves meant the smell of burnt feathers as well as burning meat.
Normally sacrifices resulted in bones and fat being burnt for the gods on high altars. I suspect the height was not only part of the spectacle but got the greasy smoke above the heads of the crowd rather than driving them away.
Temples had big kettles as part of their normal equipment. The full-time priests took the meat and boiled it like pot roast (no veggies mentioned). They were the cooks. The meat was then shared out among the congregation as a communal sacramental meal. It was said that for poor men this might be the only meat they ate.
So you didn't watch the sacrifice and go home. Waiting for the rest of the ceremony gave you time to socialize with other Greeks and citizens.
SOURCES:
Pausanias. Guide to Greece, trans. by Peter Levi, because the notes are so good.
EDIT: About those ashes, the only thing I can find is in one of Pausanias's chapters on Olympia (he has two), where the ashes are mixed with water and plastered onto the hill on top of which the altar stands. This ash-hill was out in the open so it seems it never got too huge. If this had been normal, he would not have remarked on it.
|
[
"stackoverflow",
"0004548145.txt"
] | Q:
Low level details of inheritance and polymorphism
This question is one of the big doubts that looms around my head and is also hard to describe in words. Sometimes it seems obvious and sometimes a tough one to crack. So the question goes like this:
class Base{
public:
int a_number;
Base(){}
virtual void function1() {}
virtual void function2() {}
void function3() {}
};
class Derived:public Base{
public:
Derived():Base() {}
void function1() {cout << "Derived from Base" << endl;}
virtual void function4() {cout << "Only in derived" << endl;}
};
int main(){
Derived *der_ptr = new Derived();
Base *b_ptr = der_ptr; // As just address is being passed , b_ptr points to derived object
b_ptr -> function4(); // Will Give Compilation ERROR!!
b_ptr -> function1(); // Calls the Derived class overridden method
return 0;
}
Q1. Though b_ptr is pointing to a Derived object, which VTABLE does it access, and HOW? (b_ptr -> function4() gives a compilation error.) Or is it that b_ptr can only access up to the size of the Base class VTABLE within the Derived VTABLE?
Q2. Since the memory layout of the Derived must be (Base,Derived) , is the VTABLE of the Base class also included in the memory layout of the Derived class?
Q3. Since function1 and function2 of the Base class VTABLE point to the Base class implementations, and function2 of the Derived class points to function2 of the Base class, is there really a need for a VTABLE in the Base class?? (This might be the dumbest question I can ever ask, but I am still in doubt about this in my present state, and the answer must be related to the answer of Q1 :) )
Please Comment.
Thanks for the patience.
A:
As a further illustration, here is a C version of your C++ program, showing vtables and all.
#include <stdlib.h>
#include <stdio.h>
typedef struct Base Base;
struct Base_vtable_layout{
void (*function1)(Base*);
void (*function2)(Base*);
};
struct Base{
struct Base_vtable_layout* vtable_ptr;
int a_number;
};
void Base_function1(Base* this){}
void Base_function2(Base* this){}
void Base_function3(Base* this){}
struct Base_vtable_layout Base_vtable = {
&Base_function1,
&Base_function2
};
void Base_Base(Base* this){
this->vtable_ptr = &Base_vtable;
};
Base* new_Base(){
Base *res = (Base*)malloc(sizeof(Base));
Base_Base(res);
return res;
}
typedef struct Derived Derived;
struct Derived_vtable_layout{
struct Base_vtable_layout base;
void (*function4)(Derived*);
};
struct Derived{
struct Base base;
};
void Derived_function1(Base* _this){
Derived *this = (Derived*)_this;
printf("Derived from Base\n");
}
void Derived_function4(Derived* this){
printf("Only in derived\n");
}
struct Derived_vtable_layout Derived_vtable =
{
{ &Derived_function1,
&Base_function2},
&Derived_function4
};
void Derived_Derived(Derived* this)
{
Base_Base((Base*)this);
this->base.vtable_ptr = (struct Base_vtable_layout*)&Derived_vtable;
}
Derived* new_Derived(){
Derived *res = (Derived*)malloc(sizeof(Derived));
Derived_Derived(res);
return res;
}
int main(){
Derived *der_ptr = new_Derived();
Base *b_ptr = &der_ptr->base;
/* b_ptr->vtable_ptr->function4(b_ptr); Will Give Compilation ERROR!! */
b_ptr->vtable_ptr->function1(b_ptr);
return 0;
}
A:
Q1 - name resolution is static. Since b_ptr is of type Base*, the compiler can't see any of the names unique to Derived in order to access their entries in the v_table.
Q2 - Maybe, maybe not. You have to remember that the vtable itself is simply a very common method of implementing runtime polymorphism and is actually not part of the standard anywhere. No definitive statement can be made about where it resides. The vtable could actually be some static table somewhere in the program that is pointed to from within the object description of instances.
Q3 - If there's a virtual entry in one place there must be in all places otherwise a bunch of difficult/impossible checks would be necessary to provide override capability. If the compiler KNOWS that you have a Base and are calling an overridden function though, it is not required to access the vtable but could simply use the function directly; it can even inline it if it wants.
A:
A1. The vtable pointer is pointing to a Derived vtable, but the compiler doesn't know that. You told it to treat it as a Base pointer, so it can only call methods that are valid for the Base class, no matter what the pointer points to.
A2. The vtable layout is not specified by the standard, it isn't even officially part of the class. It's just the 99.99% most common implementation method. The vtable isn't part of the object layout, but there's a pointer to the vtable that's a hidden member of the object. It will always be in the same relative location in the object so that the compiler can always generate code to access it, no matter which class pointer it has. Things get more complicated with multiple inheritance, but lets not go there yet.
A3. Vtables exist once per class, not once per object. The compiler needs to generate one even if it never gets used, because it doesn't know that ahead of time.
|
[
"stackoverflow",
"0000030321.txt"
] | Q:
How to store Application Messages for a .NET Website
I am looking for a method of storing Application Messages, such as
"You have logged in successfully"
"An error has occurred, please call the helpdesk on x100"
"You do not have the authority to reset all system passwords" etc
So that "when" the users decide they don't like the wording of messages I don't have to change the source code, recompile then redeploy - instead I just change the message store.
I really like the way that I can easily access strings in the web.config using keys and values.
ConfigurationManager.AppSettings("LOGINSUCCESS");
However as I could have a large number of application messages I didn't want to use the web.config directly. I was going to add a 2nd web config file and use that but of course you can only have one per virtual directory.
Does anyone have any suggestions on how to do this without writing much custom code?
A:
In your Web.config, under appSettings, change it to:
<appSettings file="StringKeys.config">
Then, create your StringKeys.config file and have all your keys in it.
You can still use the AppSettings area in the main web.config for any real application related keys.
A:
Put the strings in an xml file and use a filewatcher to check for updates to the file
Put the strings in a database, cache them and set a reasonable expiration policy
|
[
"stackoverflow",
"0011730720.txt"
] | Q:
Anyway to get div with position:absolute outside of div with position:relative?
I have a structure like this:
<div class="a">
<div class="b">
<div>
<div class="c">
</div>
</div>
</div>
</div>
CSS:
.a { position:relative; }
.b { position:absolute; }
I understand that defining top and left/right properties positions absolute div to either its parent with position:relative or to the browsers window if such a parent doesn't exist. What I'm faced with, I cannot change the CSS for .a and .b. And I need .c to be on top of .a and slightly out of it. So that .a doesn't get a scroll bar.
Some ASCII art to illustrate, I guess :)
I have:
-------------------
| .a |^|
| | |<-- Scroll bar
| ------ | |
| | .c | |*|
-------------------
I need:
--------------------
| .a |
| |<-- No scroll bar
| ------ |
| | .c | |
----| | ---------
| |
------
A:
This solution will stack all items and the <div class="c"> will reach out of its parent container:
CSS
<style>
.a {
position:relative;
width:200px;
height:200px;
border:1px solid red;
background:#eee;
}
.b {
position:absolute;
top:20px;
left:20px;
width:200px;
height:200px;
border:1px solid blue;
background:#ccc;
}
.c {
position:absolute;
bottom:-50px;
left:20px;
width:100px;
height:100px;
border:1px solid orange;
background:#aaa;
}
</style>
HTML
<div class="a">
<div class="b">
<div>
CONTENT
<div class="c"></div>
</div>
</div>
</div>
Note that this will only work, if the parent container has overflow:visible. When one of the parents has overflow:hidden|scroll you can't solve this, I guess.
|
[
"stackoverflow",
"0026521062.txt"
] | Q:
Mysql add variable in select statement
I have a query like this
SELECT a.id AS col1, b.id AS col2, b.title AS col3
FROM tbl1 a
INNER JOIN tbl2 b ON a.tbl2_id=b.id
This query works properly. Now I want to make a subquery and pass it the value of col2, like this:
SELECT a.id AS col1, b.id AS col2, b.title AS col3, (
SELECT a.name
FROM tbl1 a
INNER JOIN tbl2 b ON a.tbl2_id=b.id
WHERE a.id= [value of col2]
)
FROM tbl1 a INNER JOIN tbl2 b ON a.tbl2_id=b.id
How can I pass the value of col2 into the subquery? Thanks in advance.
A:
If I just transform your query to do what you want, here is the result:
SELECT
a.id AS col1,
b.id AS col2,
b.title AS col3,
t.name
FROM
tbl1 a
JOIN tbl2 b ON a.tbl2_id = b.id
JOIN (
SELECT a.id, a.name
FROM tbl1 a
JOIN tbl2 b ON a.tbl2_id = b.id
) t ON t.id = b.id;
But I'm pretty sure you can simplify it. It is not clear to me as to why you want to join it that way, but probably it is because the table names are not the real ones.
A:
Also you can use the MySQL User-Defined Variables in order to avoid multiple joins and keep the code "more readable"
SELECT a.id AS col1, @myBCol2Value := b.id AS col2, b.title AS col3, (
SELECT a.name
FROM tbl1 a
INNER JOIN tbl2 b ON a.tbl2_id = b.id
WHERE a.id = @myBCol2Value
) AS name
FROM tbl1 a
INNER JOIN tbl2 b ON a.tbl2_id = b.id;
|
[
"stackoverflow",
"0024749703.txt"
] | Q:
using terminal message to run other script
Is it possible to read the output of a terminal command in order to run another script?
What I am doing: I need to know whether the Asterisk server is connected to a PSTN line or not. If it is not connected, some other script is run to notify me. It is possible to check the connection between Asterisk and the PSTN line by simply running the
[root@localhost] # lsdahdi
command in a terminal on the Asterisk server.
If the PSTN line is not connected, it returns
.###Span 1: OPVXA1200/12 "OpenVox A1200P/A800P Board 13" (MASTER)
1 FXO FXSKS (In use) (EC: OSLEC - INACTIVE) RED
2 FXO FXSKS (In use) (EC: OSLEC - INACTIVE) RED
3 FXO FXSKS (In use) (EC: OSLEC - INACTIVE) RED
4 FXO FXSKS (In use) (EC: OSLEC - INACTIVE) RED
"RED" means no pstn line connected to this port. Can i read this message "RED" to run other ?
A:
You can use awk:
lsdahdi | awk '{if ($NF == "RED") print "pstn line ", $1, "is red"}'
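If you want to sanity-check that awk filter without DAHDI hardware, you can feed it sample lsdahdi-style lines from a here-doc (the sample lines below are invented for illustration):

```shell
# Pipe made-up lsdahdi-style output through the same awk filter;
# only the line whose last field is RED should be reported.
awk '{if ($NF == "RED") print "pstn line ", $1, "is red"}' <<'EOF'
1 FXO FXSKS (In use) (EC: OSLEC - INACTIVE) RED
2 FXO FXSKS (In use) (EC: OSLEC - INACTIVE) GREEN
EOF
```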
|
[
"stackoverflow",
"0018163337.txt"
] | Q:
Don't understand \n
Could you help me understand how to get the same result by using \n?
System.out.println("Balance: " + balance);
System.out.println();
I have tried something like
System.out.println("Balance: " + balance +\n);
Not working. Don't know whether it is possible or not.
A:
System.out.println("Balance: " + balance +"\n");
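As a quick self-contained check (the balance value here is invented for the demo), the single call with "\n" prints the balance line followed by a blank line, just like the original pair of println calls:

```java
public class NewlineDemo {
    public static void main(String[] args) {
        double balance = 42.5; // hypothetical value, just for the demo

        // Two calls: prints the balance line, then an empty line.
        System.out.println("Balance: " + balance);
        System.out.println();

        // One call with "\n" appended: same visible output.
        System.out.println("Balance: " + balance + "\n");
    }
}
```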
|
[
"electronics.stackexchange",
"0000175442.txt"
] | Q:
Wire wrap wire quality
I am looking at two kinds of wire wrap wire on Amazon.
The first is the better, more expensive kind. It is silver plated and has Kynar insulation.
http://www.amazon.com/dp/B006C4AGMU/ref=biss_dp_t_asn
The second is a cheaper kind. It is tin plated and has PVC insulation.
http://www.amazon.com/Amico-B-30-1000-Plated-Copper-Wrepping/dp/B008AGUDEY
I am going to use it to connect to LED leads, which are tin plated brass, I believe. So I do not see much advantage in using silver plated wire if the LED leads are not equally noble. Or am I wrong?
The application temperature should never exceed 150 Fahrenheit (65 Celsius), so is there much advantage to Kynar insulation?
Will it be just as well to buy the cheaper wire for this application?
EDIT: Additional info.
I will be using this hand tool for both stripping and wrapping: Jonard WSU-30M
on page 34 here and how to use here
The LEDs are not specifically made for wire wrapping, but they have fairly stiff tin plated brass leads that have a square profile section, although the edges may not be as sharp as purpose built WW leads.
I did not want to solder to the LEDs at all - using wire wrap only. At least that's the hope.
A:
If you're using an automatic cut-strip-wrap bit on the wire-wrap gun, use the Kynar wire. Otherwise it cuts, strips, and wraps, but not necessarily in the correct order! I had trouble with the wrong wire - often there were breaks inside the first (insulated) turn round the pin, so a successful looking joint was actually open circuit.
If you're using the normal (much slower to use) bits, or a hand wire-wrap tool, then either wire is fine.
The wire wrap gun, with the correct bit, correct wire, and practice was probably 20x as fast, maybe more.
The gun I used was the green Gardner-Denver one, (eBay link) only 240V. Nowadays available from Weller (price has gone up a little). As for the bit ... I couldn't get their own Cut/Strip/Wrap bit to work worth a damn. In the end I had to phone a friend at the BBC and beg him to find the right part : from an obscure British manufacturer probably long gone. However the tool looked a bit like a precision-made version of (and worked like) this description.
I can vouch for the gun. I can't remember any longer whether Kynar or Tefzel wire gave the higher success rate. And I can't vouch for the specific bit shown, though it operates on the right principle (you insert the wire at the tool end, and the cutter 1 inch up trims it to length). But I do remember the right tool, wire, and technique gave a huge productivity increase and very low failure rate.
A:
This is a solid "don't know - BUT":
That's a very large $/foot difference: 12c versus 0.9c, or roughly 13:1!
However, if the total length needed was under 1 roll I'd be tempted to use the Kynar.
If it was a large installation where cost started to be annoying I'd do some more research. The question needs more information than provided to be answered really well - see below.
The result depends on how you are terminating the wire, which you don't actually say. eg are you using a strip and wrap tool, or manual strip then wrap, or soldering? IF strip and wrap tool - are the LEDs specifically manufactured for wire wrapping use? (pin edge "sharpness" matters).
FWIW - you MIGHT manage to solder through PVC, although not at all recommended. You cannot solder through Kynar. You can wrap Kynar around a heated soldering iron tip and, while it gets very sad looking, it maintains its insulation, more or less - great stuff :-).
Kynar is nasty to hand strip, and 30 gauge wire is somewhat fragile (both are 30 gauge) but OK once in place. If you are soldering it then the tinned copper is fine. If wrapping then you are presumably stripping it first then relying on the wrapped pin to wire contact. WW pins have sharp edges designed to bite into the wire. LED pins, though maybe of square section, are not intended to do this (AFAIK). Any help you can give an unsoldered join may keep it alive years on (or even months on in some environments).
Vaguely relevant: Long ago the then sole national NZ telecom company started using a copper to copper twist join inside a melted PVC outer. In the variable fullness of time this was found to be a fatally bad idea and all such joins had to be re-made. They were then soldered.
|
[
"stackoverflow",
"0040797346.txt"
] | Q:
Yii2: phpoffice/phpexcel identifies .xlsx file as HTML
I am trying to use phpoffice/phpexcel plugin for my yii2 project to read excel files. For files with xls extension, the plugin works perfectly and I am able to read the contents but when I try using files with japanese filenames and xlsx extension, it gives me an error like so:
DOMDocument::loadHTML(): Invalid char in CDATA 0x3 in Entity, line: 1.
I tried investigating PHPExcel_IOFactory::identify. Investigating the function, I came across createReaderForFile in the IOFactory class and when checking the extension type set here, it says 'Excel2007' but for some reason, at the very end of the process, the file is still identified as HTML.
To further depict the issue, my files have different extensions and names but basically the same content like so:
col1 col2 col3
aaaa bbbb cccc
The files are as follows:
あああ.xls (can be read)
あああ.xlsx (can't be read)
aaaa.xls (can be read)
aaaa.xlsx (can be read)
Only あああ.xlsx can't be read but the rest are fine. Is this some sort of limitation to the phpoffice/phpexcel plugin? If it is, can you suggest other yii2 extensions that will enable me to read both xlsx and xls files properly? Or is there some way to fix this so that it can correctly identify the files?
A:
I've managed to fix this now. The issue seems to be something about the file encoding on zip file, adding \PHPExcel_Settings::setZipClass(\PHPExcel_Settings::PCLZIP); before PHPExcel_IOFactory::identify fixed it.
|
[
"stackoverflow",
"0000625512.txt"
] | Q:
Set some web directories as restricted directories
I am doing PHP web application, with Apache.
There are a few configuration files (like App.yml) whose content I don't want to expose to users under any circumstances. Is there any way I can tweak my Apache settings so that these files won't be available when hostile users query for them?
A:
The best option would be to place the files outside of your document root.
If that's not possible, you can deny access to them in apache .conf file (or a .htaccess file) with
<Directory /path/to/dir>
Deny from all
</Directory>
|
[
"stackoverflow",
"0061143608.txt"
] | Q:
Running e2e test cases using Protractor and getting session not created
I wrote e2e test cases using Protractor. It was working fine a few days ago, but now, while running the test cases,
I am getting:
session not created: This version of ChromeDriver only
supports Chrome version 81 (Driver info: chromedriver=81.0.4044.69
I have already installed Google Chrome version 81, but I am still getting the same error. I tried re-installing node_modules, but that did not work.
This is the configuration of my protractor.conf.json file:
const { SpecReporter } = require('jasmine-spec-reporter');
const config = require('./protractor.conf').config;
const puppeteer = require('puppeteer');
/**
* @type { import("protractor").Config }
*/
exports.config = {
allScriptsTimeout: 11000,
specs: [
'./src/**/*.e2e-spec.ts'
],
capabilities: {
browserName: 'chrome',
chromeOptions: {
args: [ "--headless", "--no-sandbox" ],
binary: puppeteer.executablePath()
},
},
directConnect: true,
baseUrl: 'http://localhost:4200/',
framework: 'jasmine',
jasmineNodeOpts: {
showColors: true,
defaultTimeoutInterval: 30000,
print: function() {}
},
onPrepare() {
require('ts-node').register({
project: require('path').join(__dirname, './tsconfig.json')
});
jasmine.getEnv().addReporter(new SpecReporter({ spec: { displayStacktrace: true } }));
}
};
A:
Because you specify binary: puppeteer.executablePath(), Protractor will use the browser provided by the npm package puppeteer, not the browser you installed yourself.
So the issue is that the version of the Chrome browser provided by puppeteer is not 81. Either update puppeteer so its bundled Chrome is version 81, or change the chromedriver version to one compatible with the current puppeteer Chrome, or remove the binary: puppeteer.executablePath() line to rely on the browser that has to be pre-installed on the test machine manually.
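For the last option, here is a hedged sketch of the trimmed capabilities section (only the relevant excerpt of the config is shown; the rest of the file stays as in the question):

```javascript
// Excerpt of the Protractor config: with the puppeteer binary line removed,
// Protractor falls back to the Chrome installed on the test machine, so the
// installed Chrome 81 and a matching chromedriver can be used together.
exports.config = {
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      args: ['--headless', '--no-sandbox'],
      // binary: puppeteer.executablePath()  // removed on purpose
    },
  },
  directConnect: true,
};
```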
|
[
"stackoverflow",
"0010935836.txt"
] | Q:
How to output multiple hash tables in Powershell
I have a hashtable of hashtables of key/value pairs (from a .ini file). It looks like this:
Name Value
---- -----
global {}
Variables {event_log, software_repo}
Copy Files {files.mssql.source, files.utils.source, files.utils.destination, fil...
How can I output all of the key/value pairs in one hash table, instead of doing this?
$ini.global; $ini.variables; $ini."Copy Files"
A:
Given that $hh is your hashtable of hashtables, you can use a loop:
foreach ($key in $hh.keys)
{
Write-Host $hh[$key]
}
|