Visual Place Recognition (VPR) systems enable vision systems to detect when they have revisited a previous location. Central to this is the use of image descriptors, which summarize the visible scene in an image for future comparison. Current applications of VPR, such as self-driving vehicles, push the limits of existing techniques, requiring long periods of operation covering large and diverse geographical areas. This introduces new challenges, such as illumination changes across the day-night cycle and seasonal variations. Neural-network-based image descriptors offer promising improvements in this area, but a trade-off has emerged: training for robustness to appearance changes versus viewpoint changes between visits to a scene. One approach uses synthetic views, which decouple the two problems by allowing the difference in viewpoint to be corrected artificially. Here we evaluate a promising method, Neural Radiance Fields (NeRF), trained on a series of images captured along a trajectory representative of real-world VPR applications. We compare the rendered frames to ground-truth frames with the same viewpoint to gauge the performance that can be expected and the potential issues in applying this technique. Overall, we find promising examples where the quality of the synthetic frames may permit their use in VPR. We also suggest future work to improve the quality and reliability of the views obtained.
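The comparison of rendered frames against ground-truth frames of the same viewpoint can be illustrated with a simple per-pixel error metric. The abstract does not specify which metric is used, so the following is only an illustrative sketch using peak signal-to-noise ratio (PSNR), a metric commonly reported for NeRF renderings; the frame values here are hypothetical.

```python
import math

def psnr(frame_a, frame_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized frames,
    given as flat sequences of pixel intensities. Higher is better."""
    # Mean squared error over corresponding pixel values.
    mse = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / len(frame_a)
    if mse == 0:
        return math.inf  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)

# Hypothetical example: a ground-truth frame and a NeRF-rendered frame,
# flattened to pixel intensity lists for simplicity.
ground_truth = [120, 130, 140, 150]
rendered = [118, 133, 139, 152]
print(round(psnr(ground_truth, rendered), 1))  # → 41.6
```

In practice, whole-image metrics such as PSNR would complement, not replace, the descriptor-level comparison that VPR systems rely on, since a descriptor may tolerate rendering artifacts that a per-pixel metric penalizes heavily.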