\documentclass[fleqn,usenatbib]{mnras}
\usepackage[T1]{fontenc}
\usepackage{ae,aecompl}
\usepackage{graphicx}   % Including figure files
\usepackage{amsmath}    % Advanced maths commands
\usepackage{amssymb}    % Extra maths symbols
\usepackage{float}
\usepackage{xcolor}
%
\newcommand{\getlength}[1]{\ifx#1\end \let\next=\relax
     \else\advance\count255 by1 \let\next=\getlength\fi \next}
%
% ----- \length -- LEARN THE LENGTH OF THE ARGUMENT
%
\newcommand{\length}[1]{ \count255=0 \getlength#1\end }
%
% ----- \ifnularg -- VERIFY: IS THE ARGUMENT EMPTY
%
\newcommand{\ifnularg}[1]{ \count255=0 \getlength#1\end \ifnum\count255=0 }
%
% ----- \ifm -- VERIFY: IS IT MATH-MODE NOW?
%
\newcommand{\ifm}{\makebox{}\ifmmode}
\long\def\ifundefined#1#2#3{\expandafter\ifx\csname #1\endcsname\relax#2\else#3\fi}
%
% ----- \beq -- BEGINNING OF FORMULA ARRAY
%
\newcommand{\beq}{ \begin{eqnarray} }
%
% ----- \eeq -- END OF FORMULA ARRAY. THE OBLIGATORY ARGUMENT IS THE LABEL
%
\newcommand{\eeq}[1]{ \ifnularg{#1} \end{eqnarray} \else \label{#1}\end{eqnarray} \fi }
\newcommand{\eeql}{ \end{eqnarray} }
\newcommand{\eeqn}{ \nonumber \end{eqnarray} }
\newcommand{\Frac}[2]{\frac{\displaystyle\strut #1}{\displaystyle\strut #2} }
\newcommand{\lp}{ \left( }
\newcommand{\rp}{ \right) }
\newcommand{\dss}{\displaystyle}
\newcommand{\Cov}{ \mathop{ \rm Cov }\nolimits }
\newcommand{\un}[1]{\underline{#1}}
\newcommand{\nc}[1]{ \multicolumn{1}{c}{#1} }
\newcommand{\nl}[1]{ \multicolumn{1}{l}{#1} }
\newcommand{\nr}[1]{ \multicolumn{1}{r}{#1} }
\newcommand{\ntab}[2]{ \multicolumn{1}{#1}{#2} }
\newcommand{\nntab}[2]{ \multicolumn{2}{#1}{#2} }
\newcommand{\nnntab}[2]{ \multicolumn{3}{#1}{#2} }
\newcommand{\nnnntab}[2]{ \multicolumn{4}{#1}{#2} }
\newcommand{\Number}[1]{\ifnum#1<10\relax0\number#1\else\number#1\fi}
\newcommand{\isodate}{
  \count151=\time \divide\count151 by 60
  \count151=\count151 \multiply\count151 by 60
  \count152=\time \advance\count152 by -\count151
  \divide\count151 by 60 \count152=\count151
  \multiply\count151 by 60
  \count153=\time \advance\count153 by -\count151
  \Number{\year}.\Number{\month}.\Number{\day}--\Number{\count152}:\Number{\count153}
}
\definecolor{Dred}{rgb}{0.312,0.070,0.070}
\definecolor{Dblue}{rgb}{0.070,0.070,0.312}
\definecolor{Dgreen}{rgb}{0.070,0.312,0.070}
\definecolor{Db}{rgb}{0.050,0.0,0.320}
\newcommand{\Gr}[1]{\textcolor{Dgreen}{#1}}
\newcommand{\Bl}[1]{\textcolor{Dblue}{#1}}
\newcommand{\Rd}[1]{\textcolor{Dred}{#1}}
\newcommand{\Grb}[1]{\textcolor{Dgreen}{\bf #1}}
\newcommand{\Blb}[1]{\textcolor{Dblue}{\bf #1}}
\newcommand{\Rdb}[1]{\textcolor{Dred}{\bf{#1}}}
\newcommand{\atca}{\mbox{\sc atca-\small{104}}}
\newcommand{\ceduna}{\mbox{\sc ceduna}}
\newcommand{\tid}{\mbox{\sc dss\small{45}}}
\newcommand{\hobart}{\mbox{\sc hobart\small{26}}}
\newcommand{\mopra}{\mbox{\sc mopra}}
\newcommand{\parkes}{{\sc parkes}}
\newcommand{\lavlba}{{\sc la--vlba}}
\newcommand{\ovvlba}{{\sc ov--vlba}}
\newcommand{\SNR}{{\mbox{\rm SNR}}}
\newcommand{\PIMA}{$\cal P\hspace{-0.067em}I\hspace{-0.067em}M\hspace{-0.067em}A\hspace{-0.1em}$ }
\newcommand{\Gaia}{{\it Gaia}}
\newcommand{\Fermi}{{\it Fermi}}
%
\newcounter{note}
\setlength{\marginparwidth}{30mm}
\let\oldmarginpar\marginpar
\renewcommand\marginpar[1]{\-\oldmarginpar[\raggedleft\footnotesize #1]%
   {\raggedright\footnotesize #1}}
%
\newcommand{\Note}[1]{\Rdb{#1}{\addtocounter{note}{1}%
   \marginpar{\small\underline{\Rdb{Corr \arabic{note}}}}}}
\newcommand{\note}[1]{\Rdb{#1}}
%
\renewcommand{\note}[1]{#1}
\renewcommand{\Note}[1]{#1}
\volume{485} \pubyear{2019} \pagerange{88--101}
\setcounter{page}{88}

\begin{document}

\subsection{Fringe fitting and preprocessing}

Visibility data were processed with the fringe fitting software PIMA
\citep{2011AJ....142...35P}. The fringe fitting procedure estimates the
phase delay rate, group delay, and group delay rate using the spectrum of
the cross-correlation function, also known as visibility data. The
estimates of the phase delay rate, group delay, and group delay rate are
then applied to the visibilities, which rotates their phases, and the
visibilities are averaged over time and over frequency within each
intermediate frequency (IF). Averaging over time was initially performed
over intervals 8.4~s long, but the averaging interval was further
increased for weak sources during consecutive stages of the data analysis.

Upon completion of fringe fitting, the data were exported to the NASA VLBI
analysis software VTD/pSolve. A model that includes source coordinates and
a clock function for all stations but the one taken as the reference,
represented by a B-spline of the first degree with a one hour stride
between knots, was fitted to the X-band and S-band group delays in two
separate least squares (LSQ) solutions. Initially, group delays with an
SNR less than 5.5 were suppressed. The SNR is defined here as the ratio of
the fringe amplitude of a given observation, averaged over time and
frequency after applying the phase delay rate, group delay, and group
delay rate, to the mean visibility amplitude. Outliers were eliminated
with a recursive algorithm that starts from the observation with the
largest normalized residual. After suppression of each observation, the
solution was updated. The iterations were terminated when the largest
normalized residual became less than $6\sigma$. Then the parametric model
was extended: estimation of the positions of all stations but the
reference one and estimation of the atmospheric path delay in the zenith
direction were added. The atmospheric path delay is modeled with a
B-spline of the first degree with a one hour stride between knots. The
outlier elimination procedure was repeated until the largest normalized
residual became less than $4.5\sigma$. Then the opposite procedure,
restoration of previously suppressed observations, was performed: the
suppressed observations with SNR $\ge 5.5$ and normalized residuals less
than $4.5\sigma$ were restored by an iterative procedure starting with the
observations with the smallest normalized residuals.
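The iterative suppression and restoration logic described above can be
summarized with the schematic sketch below. It is not the VTD/pSolve
implementation: the solver interface \texttt{solve\_lsq}, the observation
attributes, and the helper \texttt{normalized\_residual} are hypothetical
placeholders standing in for a weighted least squares fit of the
parametric model to the group delays.

\begin{verbatim}
# Schematic sketch of the outlier elimination and restoration steps
# (illustrative only; not the actual VTD/pSolve code).

def eliminate_outliers(observations, solve_lsq, snr_min=5.5,
                       reject_sigma=6.0):
    """Suppress, one at a time, the observation with the largest
    normalized residual, updating the solution after each suppression,
    until the largest residual falls below reject_sigma."""
    used = [o for o in observations if o.snr >= snr_min]
    while True:
        fit = solve_lsq(used)
        worst = max(used, key=lambda o: abs(fit.normalized_residual(o)))
        if abs(fit.normalized_residual(worst)) < reject_sigma:
            return fit, used
        used.remove(worst)

def restore_observations(observations, used, solve_lsq,
                         snr_min=5.5, accept_sigma=4.5):
    """Restore previously suppressed observations, starting with the
    smallest normalized residual, as long as they stay below
    accept_sigma; update the solution after each restoration."""
    fit = solve_lsq(used)
    candidates = [o for o in observations
                  if o not in used and o.snr >= snr_min]
    for o in sorted(candidates,
                    key=lambda o: abs(fit.normalized_residual(o))):
        if abs(fit.normalized_residual(o)) < accept_sigma:
            used.append(o)
            fit = solve_lsq(used)
    return fit, used
\end{verbatim}

In this scheme the first elimination pass corresponds to the basic
clock-plus-coordinates model with the threshold set to $6\sigma$; after
the model is extended with station positions and zenith path delays, the
same pass is repeated with a $4.5\sigma$ threshold before the restoration
step is applied.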
A source is considered detected if the number of its observations used in
the solution, i.e. not suppressed, is at least 3. Estimation of right
ascension and declination takes two degrees of freedom, so three
observations provide the minimum redundancy. Group delays of detected
observations have a Gaussian distribution with a standard deviation less
than 0.1~ns, while group delays of non-detected observations have a
uniform distribution in the range $[-4000, 4000]$~ns. The astrometric LSQ
solution therefore provides a powerful filter of false detections, since
the distribution of group delay estimates of non-detected sources is very
different from that of detected sources. The probability that a given
source had two detections and a third observation that was not detected,
but whose group delay by chance fell within $4.5\sigma$ of the
a~posteriori path delay derived from the analysis of the experiment (i.e.
in a range of approximately $[-0.5, 0.5]$~ns at S-band and
$[-0.1, 0.1]$~ns at X-band), is $0.5/4000 \approx 1.2 \cdot 10^{-4}$ at
S-band and $0.1/4000 = 2.5 \cdot 10^{-5}$ at X-band.

Then the sources that have more than two observations with SNR $\ge 5.5$,
but fewer than two observations that are not suppressed, were evaluated
further. It may happen that the outlier elimination procedure kept one or
more non-detected observations and eliminated detected ones. A
non-detection may ``poison'' the least squares solution and cause large
errors in the computed residuals, which prevents the elimination of the
non-detections. Different combinations of flagging these observations
were tried, and if a combination that left 3 or more observations with
normalized residuals less than $4.5\sigma$ was found, those flags were
retained. Then the SNR cutoff was lowered from 5.5 to 5.0 and the
procedure for restoration of suppressed observations was repeated. After
that, all suppressed observations of detected sources (i.e. those with 3
or more retained observations) were re-fringe-fitted with a narrow search
window. The a~posteriori path delays were computed using the results of
the preliminary astrometric solution, and the fringe fitting procedure
was repeated with the search window narrowed to within 3~ns of the
a~posteriori group delay at S-band and within 1~ns at X-band. The
astrometric solution was then repeated, and those observations with SNR
$\ge 4.8$ that after re-fringe-fitting had normalized residuals less than
$4.5\sigma$ were un-flagged and used for further analysis.

\subsection{Absolute astrometry}

Observations of the VLBA Northern Polar Cap survey were also used for
absolute astrometry. They were processed in a similar way as the VLBA
Calibrator surveys \citep{2008AJ....136..580P}. All dual-band geodetic
VLBI data from 24-h observing sessions from 1980.04.01 through 2020.03.09,
6498 experiments in total, together with the three observing sessions of
this survey, were processed in three least squares runs. The first run
used X/S band data from this survey, the second run used X-band data, and
the third run used S-band data. The numbers of detected target sources
from the survey used in these solutions are 104, 109, and 154,
respectively. The estimated parameters are split into three categories:
global parameters, such as station positions, station velocities, and
source coordinates; session-wide parameters, such as pole coordinates,
the UT1 angle, their time derivatives, and nutation angle offsets; and
segment-wide parameters, such as the clock function and the atmospheric
path delay in the zenith direction. The segment-wide parameters are
modeled with a B-spline with a time span of 1 hour.
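To make the segment-wide parameterization concrete, the sketch below
builds the design-matrix columns for one such parameter (for instance,
the clock function of a station) modeled as a B-spline of the first
degree, i.e.\ a piecewise-linear function, with knots placed every hour.
The function name and its arguments are illustrative and do not
correspond to any routine of the actual analysis software.

\begin{verbatim}
import numpy as np

def linear_bspline_columns(t, t_start, t_end, knot_step=3600.0):
    """Design-matrix columns for a first-degree B-spline: one 'hat'
    basis function per knot, with knots spaced knot_step seconds apart.
    t is an array of observation epochs in seconds."""
    knots = np.arange(t_start, t_end + knot_step, knot_step)
    cols = np.zeros((len(t), len(knots)))
    for j, t_knot in enumerate(knots):
        # Hat function: 1 at its own knot, decreasing linearly to 0 at
        # the neighbouring knots, and 0 everywhere else.
        cols[:, j] = np.clip(1.0 - np.abs(t - t_knot) / knot_step,
                             0.0, None)
    return cols
\end{verbatim}

The estimated coefficients of such a block of columns are the values of
the clock function (or of the zenith atmospheric path delay) at the
knots, and the modeled function is the linear interpolation between them.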
To account for systematic errors, we computed the weights in the
following way:
%
\beq
   w = \Frac{1}{k \cdot \sqrt{\sigma_g^2 + a^2 + b^2(e)}},
\eeq{e:e1}
%
where $\sigma_g$ is the group delay uncertainty, $k$ is a multiplicative
factor, $a$ is the elevation-independent additive weight correction, and
$b$ is the elevation-dependent weight correction. We used $k=1.3$ based
on the analysis of the VLBI-Gaia offsets \citep{r:gaia4}. For processing
dual-band observations we used
$b^2(e) = \beta \, ( \tau^2_{{\rm atm},1}(e_1) + \tau^2_{{\rm atm},2}(e_2) )$,
where $\tau_{{\rm atm},i}(e_i)$ is the atmospheric path delay at the
$i$th station at elevation $e_i$. We used $\beta=0.02$ in our work.

For processing single-band observations we computed the ionospheric delay
using Total Electron Content (TEC) maps derived from analysis of Global
Navigation Satellite System (GNSS) observations. Specifically, we used
the CODE TEC time series \citep{r:schaer99}\footnote{Available at
\href{ftp://ftp.aiub.unibe.ch/CODE}{ftp://ftp.aiub.unibe.ch/CODE}} with a
resolution of $5^\circ \times 2.5^\circ \times 2^h$. However, the TEC
maps account only partially for the ionospheric path delay because of the
coarseness of their spatial and temporal resolution. In order to account
for the contribution of residual ionosphere-driven errors, we used the
same approach as for processing single-band Long Baseline Array
observations \citep{r:lcs2}. We computed the variances of the mismodeled
contribution of the ionosphere to the group delay in the zenith direction
for both stations of a baseline, $\Cov_{11}$ and $\Cov_{22}$, as well as
their covariance, $\Cov_{12}$. Then for each observation we computed the
predicted variance of the mismodeled ionospheric contribution as
{\small
\beq
   b^2_{\rm iono}(e) = \gamma \left( \Cov_{11} \, M_1^2(e)
                     - 2 \Cov_{12} \, M_1(e) \, M_2(e)
                     + \Cov_{22} \, M_2^2(e) \right),
   \hspace{-0.75em}
\eeq{e:e3}}
%
where $M_1(e)$ and $M_2(e)$ are the mapping functions of the ionospheric
path delay at the two stations of a baseline. We used $\gamma=0.5$ in our
analysis and added $b^2_{\rm iono}(e)$ to $b^2(e)$ when processing
single-band observations. The additive parameter $a$ was found with an
iterative procedure that makes the ratio of the weighted sum of post-fit
residuals to its mathematical expectation close to unity.

We compared the dual-band positions of 86 sources observed only in the
Northern Polar Cap campaign with the positions derived from X-band only
and S-band only observations. The position differences normalized by the
single-band position uncertainties fit a Gaussian distribution over right
ascension and declination with zero mean and a second moment of $0.5$ for
X-band positions and $0.8$ for S-band right ascensions; the S-band
declinations have a positive bias of $+10$~mas and a second moment of
$1.0$. Since the second moment of the distribution of normalized
differences does not exceed 1, we conclude that the formal uncertainties
correctly account for ionosphere-driven systematic errors. A correction
for the declination bias in S-band positions was applied in the
catalogue.

Although there have been other observations of the sources detected in
the Northern Polar Cap survey since 2006, we present in Table~\ref{??}
the positions derived from observations of this campaign. The most recent
positions of the target sources can be found in the Radio Fundamental
Catalogue\footnote{Available at
\href{http://astrogeo.org/rfc}{http://astrogeo.org/rfc}}, which is
updated on a quarterly basis.

\bibliographystyle{mnras}
\bibliography{npcs}

\end{document}